00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2471 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3732 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.105 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.145 Using shallow fetch with depth 1 00:00:00.145 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.145 > git --version # timeout=10 00:00:00.190 > git --version # 'git version 2.39.2' 00:00:00.190 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.823 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.838 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.850 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.851 > git config core.sparsecheckout # timeout=10 00:00:04.861 > git read-tree -mu HEAD # timeout=10 00:00:04.877 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.896 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.896 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.006 [Pipeline] Start of Pipeline 00:00:05.021 [Pipeline] library 00:00:05.022 Loading library shm_lib@master 00:00:05.023 Library shm_lib@master is cached. Copying from home. 00:00:05.040 [Pipeline] node 00:00:05.054 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.056 [Pipeline] { 00:00:05.067 [Pipeline] catchError 00:00:05.068 [Pipeline] { 00:00:05.082 [Pipeline] wrap 00:00:05.090 [Pipeline] { 00:00:05.098 [Pipeline] stage 00:00:05.100 [Pipeline] { (Prologue) 00:00:05.116 [Pipeline] echo 00:00:05.118 Node: VM-host-SM38 00:00:05.122 [Pipeline] cleanWs 00:00:05.132 [WS-CLEANUP] Deleting project workspace... 00:00:05.132 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.138 [WS-CLEANUP] done 00:00:05.327 [Pipeline] setCustomBuildProperty 00:00:05.417 [Pipeline] httpRequest 00:00:05.785 [Pipeline] echo 00:00:05.787 Sorcerer 10.211.164.20 is alive 00:00:05.796 [Pipeline] retry 00:00:05.798 [Pipeline] { 00:00:05.809 [Pipeline] httpRequest 00:00:05.814 HttpMethod: GET 00:00:05.815 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.815 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.816 Response Code: HTTP/1.1 200 OK 00:00:05.817 Success: Status code 200 is in the accepted range: 200,404 00:00:05.817 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.405 [Pipeline] } 00:00:06.427 [Pipeline] // retry 00:00:06.434 [Pipeline] sh 00:00:06.719 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.737 [Pipeline] httpRequest 00:00:07.102 [Pipeline] echo 00:00:07.103 Sorcerer 10.211.164.20 is alive 00:00:07.112 [Pipeline] retry 00:00:07.113 [Pipeline] { 00:00:07.123 [Pipeline] httpRequest 00:00:07.127 HttpMethod: GET 00:00:07.128 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.128 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.140 Response Code: HTTP/1.1 200 OK 00:00:07.140 Success: Status code 200 is in the accepted range: 200,404 00:00:07.141 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:18.726 [Pipeline] } 00:01:18.744 [Pipeline] // retry 00:01:18.752 [Pipeline] sh 00:01:19.045 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:22.359 [Pipeline] sh 00:01:22.640 + git -C spdk log --oneline -n5 00:01:22.640 c13c99a5e test: Various fixes for Fedora40 00:01:22.640 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:22.640 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:22.640 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:22.640 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:22.658 [Pipeline] writeFile 00:01:22.672 [Pipeline] sh 00:01:22.959 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:22.973 [Pipeline] sh 00:01:23.276 + cat autorun-spdk.conf 00:01:23.276 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.276 SPDK_TEST_NVME=1 00:01:23.276 SPDK_TEST_FTL=1 00:01:23.276 SPDK_TEST_ISAL=1 00:01:23.276 SPDK_RUN_ASAN=1 00:01:23.276 SPDK_RUN_UBSAN=1 00:01:23.276 SPDK_TEST_XNVME=1 00:01:23.276 SPDK_TEST_NVME_FDP=1 00:01:23.276 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.284 RUN_NIGHTLY=1 00:01:23.286 [Pipeline] } 00:01:23.299 [Pipeline] // stage 00:01:23.313 [Pipeline] stage 00:01:23.314 [Pipeline] { (Run VM) 00:01:23.327 [Pipeline] sh 00:01:23.612 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:23.612 + echo 'Start stage prepare_nvme.sh' 00:01:23.612 Start stage prepare_nvme.sh 00:01:23.612 + [[ -n 4 ]] 00:01:23.612 + disk_prefix=ex4 00:01:23.612 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:23.612 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:23.612 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:23.612 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.612 ++ SPDK_TEST_NVME=1 00:01:23.612 ++ SPDK_TEST_FTL=1 00:01:23.612 ++ SPDK_TEST_ISAL=1 00:01:23.612 ++ 
SPDK_RUN_ASAN=1 00:01:23.612 ++ SPDK_RUN_UBSAN=1 00:01:23.612 ++ SPDK_TEST_XNVME=1 00:01:23.612 ++ SPDK_TEST_NVME_FDP=1 00:01:23.612 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.612 ++ RUN_NIGHTLY=1 00:01:23.612 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:23.612 + nvme_files=() 00:01:23.612 + declare -A nvme_files 00:01:23.612 + backend_dir=/var/lib/libvirt/images/backends 00:01:23.612 + nvme_files['nvme.img']=5G 00:01:23.612 + nvme_files['nvme-cmb.img']=5G 00:01:23.612 + nvme_files['nvme-multi0.img']=4G 00:01:23.612 + nvme_files['nvme-multi1.img']=4G 00:01:23.612 + nvme_files['nvme-multi2.img']=4G 00:01:23.612 + nvme_files['nvme-openstack.img']=8G 00:01:23.612 + nvme_files['nvme-zns.img']=5G 00:01:23.612 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:23.612 + (( SPDK_TEST_FTL == 1 )) 00:01:23.612 + nvme_files["nvme-ftl.img"]=6G 00:01:23.612 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:23.612 + nvme_files["nvme-fdp.img"]=1G 00:01:23.612 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:23.612 + for nvme in "${!nvme_files[@]}" 00:01:23.612 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:23.612 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.612 + for nvme in "${!nvme_files[@]}" 00:01:23.612 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:01:23.873 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:23.873 + for nvme in "${!nvme_files[@]}" 00:01:23.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:23.873 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.873 + for nvme in "${!nvme_files[@]}" 00:01:23.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:23.873 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:23.873 + for nvme in "${!nvme_files[@]}" 00:01:23.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:24.813 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.813 + for nvme in "${!nvme_files[@]}" 00:01:24.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:24.813 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:24.813 + for nvme in "${!nvme_files[@]}" 00:01:24.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:24.813 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:24.813 + for nvme in "${!nvme_files[@]}" 00:01:24.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:01:24.813 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:24.813 + for nvme in "${!nvme_files[@]}" 00:01:24.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:25.384 Formatting 
'/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.384 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:25.384 + echo 'End stage prepare_nvme.sh' 00:01:25.384 End stage prepare_nvme.sh 00:01:25.394 [Pipeline] sh 00:01:25.676 + DISTRO=fedora39 00:01:25.677 + CPUS=10 00:01:25.677 + RAM=12288 00:01:25.677 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:25.677 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:25.677 00:01:25.677 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:25.677 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:25.677 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:25.677 HELP=0 00:01:25.677 DRY_RUN=0 00:01:25.677 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:01:25.677 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:25.677 NVME_AUTO_CREATE=0 00:01:25.677 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:01:25.677 NVME_CMB=,,,, 00:01:25.677 NVME_PMR=,,,, 00:01:25.677 NVME_ZNS=,,,, 00:01:25.677 NVME_MS=true,,,, 00:01:25.677 NVME_FDP=,,,on, 00:01:25.677 SPDK_VAGRANT_DISTRO=fedora39 00:01:25.677 SPDK_VAGRANT_VMCPU=10 00:01:25.677 SPDK_VAGRANT_VMRAM=12288 00:01:25.677 SPDK_VAGRANT_PROVIDER=libvirt 00:01:25.677 SPDK_VAGRANT_HTTP_PROXY= 00:01:25.677 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:25.677 SPDK_OPENSTACK_NETWORK=0 00:01:25.677 VAGRANT_PACKAGE_BOX=0 00:01:25.677 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:25.677 FORCE_DISTRO=true 00:01:25.677 VAGRANT_BOX_VERSION= 00:01:25.677 EXTRA_VAGRANTFILES= 00:01:25.677 NIC_MODEL=e1000 00:01:25.677 00:01:25.677 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:25.677 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:28.225 Bringing machine 'default' up with 'libvirt' provider... 00:01:28.487 ==> default: Creating image (snapshot of base box volume). 00:01:28.749 ==> default: Creating domain with the following settings... 
00:01:28.749 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734378875_85a49b1f8d1d759c44e8 00:01:28.749 ==> default: -- Domain type: kvm 00:01:28.749 ==> default: -- Cpus: 10 00:01:28.749 ==> default: -- Feature: acpi 00:01:28.749 ==> default: -- Feature: apic 00:01:28.749 ==> default: -- Feature: pae 00:01:28.749 ==> default: -- Memory: 12288M 00:01:28.749 ==> default: -- Memory Backing: hugepages: 00:01:28.749 ==> default: -- Management MAC: 00:01:28.749 ==> default: -- Loader: 00:01:28.749 ==> default: -- Nvram: 00:01:28.749 ==> default: -- Base box: spdk/fedora39 00:01:28.749 ==> default: -- Storage pool: default 00:01:28.749 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734378875_85a49b1f8d1d759c44e8.img (20G) 00:01:28.749 ==> default: -- Volume Cache: default 00:01:28.749 ==> default: -- Kernel: 00:01:28.749 ==> default: -- Initrd: 00:01:28.749 ==> default: -- Graphics Type: vnc 00:01:28.749 ==> default: -- Graphics Port: -1 00:01:28.749 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.749 ==> default: -- Graphics Password: Not defined 00:01:28.749 ==> default: -- Video Type: cirrus 00:01:28.749 ==> default: -- Video VRAM: 9216 00:01:28.749 ==> default: -- Sound Type: 00:01:28.749 ==> default: -- Keymap: en-us 00:01:28.749 ==> default: -- TPM Path: 00:01:28.749 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.749 ==> default: -- Command line args: 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:28.749 ==> default: -> value=-drive, 00:01:28.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:28.749 ==> default: -> value=-drive, 00:01:28.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:28.749 ==> default: -> value=-drive, 00:01:28.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.749 ==> default: -> value=-drive, 00:01:28.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.749 ==> default: -> value=-drive, 00:01:28.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:28.749 ==> default: -> value=-drive, 00:01:28.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:28.749 ==> default: -> value=-device, 00:01:28.749 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:29.010 ==> default: Creating shared folders metadata... 00:01:29.010 ==> default: Starting domain. 00:01:30.922 ==> default: Waiting for domain to get an IP address... 00:01:49.032 ==> default: Waiting for SSH to become available... 00:01:49.032 ==> default: Configuring and enabling network interfaces... 00:01:51.581 default: SSH address: 192.168.121.193:22 00:01:51.581 default: SSH username: vagrant 00:01:51.581 default: SSH auth method: private key 00:01:54.143 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:02.286 ==> default: Mounting SSHFS shared folder... 00:02:04.253 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:04.253 ==> default: Checking Mount.. 00:02:05.196 ==> default: Folder Successfully Mounted! 00:02:05.196 00:02:05.196 SUCCESS! 00:02:05.196 00:02:05.196 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:05.196 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:05.196 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:05.196 00:02:05.207 [Pipeline] } 00:02:05.221 [Pipeline] // stage 00:02:05.231 [Pipeline] dir 00:02:05.231 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:05.233 [Pipeline] { 00:02:05.246 [Pipeline] catchError 00:02:05.247 [Pipeline] { 00:02:05.260 [Pipeline] sh 00:02:05.544 + vagrant ssh-config --host vagrant 00:02:05.544 + sed -ne '/^Host/,$p' 00:02:05.544 + tee ssh_conf 00:02:08.093 Host vagrant 00:02:08.093 HostName 192.168.121.193 00:02:08.093 User vagrant 00:02:08.093 Port 22 00:02:08.093 UserKnownHostsFile /dev/null 00:02:08.093 StrictHostKeyChecking no 00:02:08.093 PasswordAuthentication no 00:02:08.093 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:08.093 IdentitiesOnly yes 00:02:08.093 LogLevel FATAL 00:02:08.093 ForwardAgent yes 00:02:08.093 ForwardX11 yes 00:02:08.093 00:02:08.108 [Pipeline] withEnv 00:02:08.110 [Pipeline] { 00:02:08.123 [Pipeline] sh 00:02:08.407 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:08.408 source /etc/os-release 00:02:08.408 [[ -e /image.version ]] && img=$(< /image.version) 00:02:08.408 # Minimal, systemd-like check. 
00:02:08.408 if [[ -e /.dockerenv ]]; then 00:02:08.408 # Clear garbage from the node'\''s name: 00:02:08.408 # agt-er_autotest_547-896 -> autotest_547-896 00:02:08.408 # $HOSTNAME is the actual container id 00:02:08.408 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:08.408 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:08.408 # We can assume this is a mount from a host where container is running, 00:02:08.408 # so fetch its hostname to easily identify the target swarm worker. 00:02:08.408 container="$(< /etc/hostname) ($agent)" 00:02:08.408 else 00:02:08.408 # Fallback 00:02:08.408 container=$agent 00:02:08.408 fi 00:02:08.408 fi 00:02:08.408 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:08.408 ' 00:02:08.421 [Pipeline] } 00:02:08.436 [Pipeline] // withEnv 00:02:08.445 [Pipeline] setCustomBuildProperty 00:02:08.459 [Pipeline] stage 00:02:08.462 [Pipeline] { (Tests) 00:02:08.478 [Pipeline] sh 00:02:08.762 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:09.039 [Pipeline] sh 00:02:09.323 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:09.339 [Pipeline] timeout 00:02:09.339 Timeout set to expire in 50 min 00:02:09.341 [Pipeline] { 00:02:09.355 [Pipeline] sh 00:02:09.640 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:10.212 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:10.226 [Pipeline] sh 00:02:10.510 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:10.787 [Pipeline] sh 00:02:11.071 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:11.088 [Pipeline] sh 00:02:11.373 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:11.373 ++ readlink -f spdk_repo 00:02:11.373 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:11.373 + [[ -n /home/vagrant/spdk_repo ]] 00:02:11.373 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:11.373 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:11.373 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:11.373 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:11.373 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:11.373 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:11.373 + cd /home/vagrant/spdk_repo 00:02:11.373 + source /etc/os-release 00:02:11.373 ++ NAME='Fedora Linux' 00:02:11.373 ++ VERSION='39 (Cloud Edition)' 00:02:11.373 ++ ID=fedora 00:02:11.373 ++ VERSION_ID=39 00:02:11.373 ++ VERSION_CODENAME= 00:02:11.373 ++ PLATFORM_ID=platform:f39 00:02:11.373 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.373 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.373 ++ LOGO=fedora-logo-icon 00:02:11.373 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.373 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.373 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.373 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.373 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.373 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.373 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.373 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.373 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.373 ++ SUPPORT_END=2024-11-12 00:02:11.373 ++ VARIANT='Cloud Edition' 00:02:11.373 ++ VARIANT_ID=cloud 00:02:11.373 + uname -a 00:02:11.373 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:11.373 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:11.635 Hugepages 00:02:11.635 node hugesize free / total 00:02:11.635 node0 1048576kB 0 / 0 00:02:11.635 node0 2048kB 0 / 0 00:02:11.635 00:02:11.635 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.635 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:11.635 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:11.635 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:11.635 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:02:11.635 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:11.635 + rm -f /tmp/spdk-ld-path 00:02:11.635 + source autorun-spdk.conf 00:02:11.635 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.635 ++ SPDK_TEST_NVME=1 00:02:11.635 ++ SPDK_TEST_FTL=1 00:02:11.635 ++ SPDK_TEST_ISAL=1 00:02:11.635 ++ SPDK_RUN_ASAN=1 00:02:11.635 ++ SPDK_RUN_UBSAN=1 00:02:11.635 ++ SPDK_TEST_XNVME=1 00:02:11.635 ++ SPDK_TEST_NVME_FDP=1 00:02:11.635 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.635 ++ RUN_NIGHTLY=1 00:02:11.635 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:11.635 + [[ -n '' ]] 00:02:11.635 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:11.635 + for M in /var/spdk/build-*-manifest.txt 00:02:11.635 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:11.635 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.635 + for M in /var/spdk/build-*-manifest.txt 00:02:11.635 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:11.635 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.635 + for M in /var/spdk/build-*-manifest.txt 00:02:11.635 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:11.635 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.635 ++ uname 00:02:11.635 + [[ Linux == \L\i\n\u\x ]] 00:02:11.635 + sudo dmesg -T 00:02:11.952 + sudo dmesg --clear 00:02:11.952 + dmesg_pid=5001 00:02:11.952 + [[ Fedora Linux == FreeBSD ]] 00:02:11.952 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.952 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.952 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:11.952 + [[ -x /usr/src/fio-static/fio ]] 00:02:11.952 + sudo dmesg -Tw 00:02:11.952 + export FIO_BIN=/usr/src/fio-static/fio 00:02:11.952 + FIO_BIN=/usr/src/fio-static/fio 00:02:11.952 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:11.952 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:11.952 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:11.952 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.952 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.952 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:11.952 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.952 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.952 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:11.952 Test configuration: 00:02:11.952 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.952 SPDK_TEST_NVME=1 00:02:11.952 SPDK_TEST_FTL=1 00:02:11.952 SPDK_TEST_ISAL=1 00:02:11.952 SPDK_RUN_ASAN=1 00:02:11.952 SPDK_RUN_UBSAN=1 00:02:11.952 SPDK_TEST_XNVME=1 00:02:11.952 SPDK_TEST_NVME_FDP=1 00:02:11.952 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.952 RUN_NIGHTLY=1 19:55:19 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:11.952 19:55:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:11.952 19:55:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:11.952 19:55:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.952 19:55:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.952 19:55:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.952 19:55:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.952 19:55:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.952 19:55:19 -- paths/export.sh@5 -- $ export PATH 00:02:11.952 19:55:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.952 19:55:19 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:02:11.952 19:55:19 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:11.952 19:55:19 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734378919.XXXXXX 00:02:11.952 19:55:19 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734378919.h8BG6M 00:02:11.952 19:55:19 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:11.952 19:55:19 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:11.952 19:55:19 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:11.952 19:55:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:11.952 19:55:19 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:11.952 19:55:19 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:11.952 19:55:19 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:11.952 19:55:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.952 19:55:19 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:11.952 19:55:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.952 19:55:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.952 19:55:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.952 19:55:19 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.952 Mon Dec 16 07:55:19 PM UTC 2024 00:02:11.952 19:55:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.952 LTS-67-gc13c99a5e 00:02:11.952 19:55:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:11.952 19:55:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:11.952 19:55:19 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:11.952 19:55:19 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:11.952 19:55:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.952 ************************************ 00:02:11.952 START TEST asan 00:02:11.952 ************************************ 00:02:11.952 using asan 00:02:11.952 19:55:19 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:11.952 00:02:11.952 real 0m0.000s 00:02:11.952 user 0m0.000s 00:02:11.952 sys 0m0.000s 00:02:11.952 19:55:19 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:11.952 19:55:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.952 ************************************ 00:02:11.952 END TEST asan 00:02:11.952 ************************************ 00:02:11.952 19:55:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.952 19:55:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.952 19:55:19 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:11.952 19:55:19 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:11.952 19:55:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.952 ************************************ 00:02:11.952 START TEST ubsan 00:02:11.952 ************************************ 00:02:11.952 using ubsan 00:02:11.952 19:55:19 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:11.952 00:02:11.952 real 0m0.000s 00:02:11.952 user 0m0.000s 00:02:11.952 
sys 0m0.000s 00:02:11.952 19:55:19 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:11.952 19:55:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.952 ************************************ 00:02:11.952 END TEST ubsan 00:02:11.952 ************************************ 00:02:11.952 19:55:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.952 19:55:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.952 19:55:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.952 19:55:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.952 19:55:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.952 19:55:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.952 19:55:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.952 19:55:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.952 19:55:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:11.952 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:11.952 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.229 Using 'verbs' RDMA provider 00:02:23.159 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:33.145 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:33.145 Creating mk/config.mk...done. 00:02:33.145 Creating mk/cc.flags.mk...done. 00:02:33.145 Type 'make' to build. 00:02:33.145 19:55:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:33.145 19:55:40 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:33.145 19:55:40 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.145 19:55:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.145 ************************************ 00:02:33.145 START TEST make 00:02:33.145 ************************************ 00:02:33.145 19:55:40 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:33.145 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:33.145 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:33.145 meson setup builddir \ 00:02:33.145 -Dwith-libaio=enabled \ 00:02:33.145 -Dwith-liburing=enabled \ 00:02:33.145 -Dwith-libvfn=disabled \ 00:02:33.145 -Dwith-spdk=false && \ 00:02:33.145 meson compile -C builddir && \ 00:02:33.145 cd -) 00:02:33.145 make[1]: Nothing to be done for 'all'. 
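For readability, the xnvme build step that `make` kicks off above can be reassembled as the following shell sketch; it is the same Meson invocation shown (flattened) in the log, with paths taken from this VM's layout:

    # Sketch: the xnvme subproject configure+build that the SPDK make step runs above,
    # reassembled from the flattened log line for readability. Paths match this vagrant VM.
    # Backends: libaio and io_uring enabled, libvfn disabled, no nested SPDK backend.
    (cd /home/vagrant/spdk_repo/spdk/xnvme && \
      export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
      meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false && \
      meson compile -C builddir && \
      cd -)

The -Dwith-spdk=false option leaves xnvme's own SPDK backend out of this in-tree build; the Meson output that follows reflects these options.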
00:02:35.676 The Meson build system 00:02:35.676 Version: 1.5.0 00:02:35.676 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:35.676 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:35.676 Build type: native build 00:02:35.676 Project name: xnvme 00:02:35.676 Project version: 0.7.3 00:02:35.676 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:35.676 C linker for the host machine: cc ld.bfd 2.40-14 00:02:35.676 Host machine cpu family: x86_64 00:02:35.676 Host machine cpu: x86_64 00:02:35.676 Message: host_machine.system: linux 00:02:35.676 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:35.676 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:35.676 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:35.676 Run-time dependency threads found: YES 00:02:35.676 Has header "setupapi.h" : NO 00:02:35.676 Has header "linux/blkzoned.h" : YES 00:02:35.676 Has header "linux/blkzoned.h" : YES (cached) 00:02:35.676 Has header "libaio.h" : YES 00:02:35.676 Library aio found: YES 00:02:35.676 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:35.676 Run-time dependency liburing found: YES 2.2 00:02:35.676 Dependency libvfn skipped: feature with-libvfn disabled 00:02:35.676 Run-time dependency appleframeworks found: NO (tried framework) 00:02:35.676 Run-time dependency appleframeworks found: NO (tried framework) 00:02:35.676 Configuring xnvme_config.h using configuration 00:02:35.676 Configuring xnvme.spec using configuration 00:02:35.676 Run-time dependency bash-completion found: YES 2.11 00:02:35.676 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:35.676 Program cp found: YES (/usr/bin/cp) 00:02:35.676 Has header "winsock2.h" : NO 00:02:35.676 Has header "dbghelp.h" : NO 00:02:35.676 Library rpcrt4 found: NO 00:02:35.676 Library rt found: YES 00:02:35.676 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:35.676 Found CMake: /usr/bin/cmake (3.27.7) 00:02:35.676 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:35.676 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:35.676 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:35.676 Build targets in project: 32 00:02:35.676 00:02:35.676 xnvme 0.7.3 00:02:35.676 00:02:35.676 User defined options 00:02:35.676 with-libaio : enabled 00:02:35.676 with-liburing: enabled 00:02:35.676 with-libvfn : disabled 00:02:35.676 with-spdk : false 00:02:35.676 00:02:35.676 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.676 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:35.676 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:35.676 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:35.676 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:35.676 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:35.676 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:35.676 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:35.676 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:35.934 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:35.934 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:35.934 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:35.934 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:35.934 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:35.934 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:35.934 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:35.934 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:35.934 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:35.934 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:35.934 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:35.934 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:35.934 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:35.934 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:35.934 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:35.934 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:35.934 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:35.934 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:35.934 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:35.934 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:35.934 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:35.934 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:35.934 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:35.934 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:35.934 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:35.934 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:36.192 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:36.192 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:36.192 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:36.192 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:36.192 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:36.192 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:36.192 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:36.192 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:36.192 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:36.192 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:36.192 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:36.192 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:36.192 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:36.192 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:36.192 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:36.192 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:36.192 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:36.192 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:36.192 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:36.192 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:36.192 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:36.192 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:36.192 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:36.192 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:36.192 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:36.192 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:36.192 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:36.192 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:36.192 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:36.192 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:36.192 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:36.192 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:36.192 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:36.192 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:36.192 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:36.451 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:36.451 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:36.451 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:36.451 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:36.451 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:36.451 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:36.451 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:36.451 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:36.451 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:36.451 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:36.451 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:36.451 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:36.451 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:36.451 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:36.451 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:36.451 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:36.709 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:36.709 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:36.709 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:36.709 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:36.709 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:36.709 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:36.709 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:36.709 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:36.709 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:36.709 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:36.709 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:36.709 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:36.709 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:36.709 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:36.709 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:36.709 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:36.709 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:36.710 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:36.710 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:36.710 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:36.710 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:36.710 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:36.710 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:36.710 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:36.710 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:36.710 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:36.710 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:36.710 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:36.710 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:36.710 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:36.710 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:36.710 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:36.710 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:36.710 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:36.710 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:36.710 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:36.710 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:36.710 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:36.981 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:36.981 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:36.981 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:36.981 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:36.981 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:36.981 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:36.981 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:36.981 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:36.981 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:36.981 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:36.981 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:36.981 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:36.982 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:36.982 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:36.982 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:36.982 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:36.982 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:36.982 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:36.982 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:36.982 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:36.982 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:37.255 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:37.255 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:37.255 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:37.255 [147/203] Linking target lib/libxnvme.so 00:02:37.255 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:37.255 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:37.255 [150/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:37.255 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:37.255 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:37.255 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:37.255 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:37.255 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:37.255 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:37.255 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:37.255 [158/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:37.255 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:37.255 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:37.255 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:37.255 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:37.514 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:37.514 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:37.514 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:37.514 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:37.514 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:37.514 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:37.514 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:37.514 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:37.514 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:37.514 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:37.772 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:37.772 [174/203] Linking static target lib/libxnvme.a 00:02:37.772 [175/203] Linking target tests/xnvme_tests_buf 00:02:37.772 [176/203] Linking target tests/xnvme_tests_async_intf 00:02:37.772 [177/203] Linking target tests/xnvme_tests_scc 00:02:37.772 [178/203] Linking target tests/xnvme_tests_cli 00:02:37.772 [179/203] Linking target tests/xnvme_tests_enum 00:02:37.772 [180/203] Linking target tests/xnvme_tests_znd_state 00:02:37.772 [181/203] Linking target tests/xnvme_tests_lblk 00:02:37.772 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:02:37.772 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:37.772 [184/203] Linking target tests/xnvme_tests_ioworker 00:02:37.772 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:37.772 [186/203] Linking target tests/xnvme_tests_znd_append 00:02:37.772 [187/203] Linking target tests/xnvme_tests_kvs 00:02:37.772 [188/203] Linking target tests/xnvme_tests_map 00:02:37.772 [189/203] Linking target tools/xdd 00:02:37.772 [190/203] Linking target tools/zoned 00:02:37.772 [191/203] Linking 
target tests/xnvme_tests_znd_zrwa 00:02:37.772 [192/203] Linking target tools/lblk 00:02:37.772 [193/203] Linking target tools/xnvme 00:02:37.772 [194/203] Linking target examples/xnvme_enum 00:02:37.772 [195/203] Linking target tools/xnvme_file 00:02:37.772 [196/203] Linking target examples/xnvme_hello 00:02:37.772 [197/203] Linking target tools/kvs 00:02:37.772 [198/203] Linking target examples/xnvme_io_async 00:02:37.772 [199/203] Linking target examples/xnvme_dev 00:02:37.772 [200/203] Linking target examples/xnvme_single_async 00:02:37.772 [201/203] Linking target examples/zoned_io_sync 00:02:37.772 [202/203] Linking target examples/zoned_io_async 00:02:37.772 [203/203] Linking target examples/xnvme_single_sync 00:02:37.772 INFO: autodetecting backend as ninja 00:02:37.772 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:37.772 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:43.040 The Meson build system 00:02:43.040 Version: 1.5.0 00:02:43.040 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:43.040 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:43.040 Build type: native build 00:02:43.040 Program cat found: YES (/usr/bin/cat) 00:02:43.040 Project name: DPDK 00:02:43.040 Project version: 23.11.0 00:02:43.040 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:43.040 C linker for the host machine: cc ld.bfd 2.40-14 00:02:43.040 Host machine cpu family: x86_64 00:02:43.040 Host machine cpu: x86_64 00:02:43.040 Message: ## Building in Developer Mode ## 00:02:43.040 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:43.040 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:43.040 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:43.040 Program python3 found: YES (/usr/bin/python3) 00:02:43.040 Program cat found: YES (/usr/bin/cat) 00:02:43.040 Compiler for C supports arguments -march=native: YES 00:02:43.040 Checking for size of "void *" : 8 00:02:43.040 Checking for size of "void *" : 8 (cached) 00:02:43.040 Library m found: YES 00:02:43.040 Library numa found: YES 00:02:43.040 Has header "numaif.h" : YES 00:02:43.040 Library fdt found: NO 00:02:43.040 Library execinfo found: NO 00:02:43.040 Has header "execinfo.h" : YES 00:02:43.040 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:43.040 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:43.040 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:43.040 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:43.040 Run-time dependency openssl found: YES 3.1.1 00:02:43.040 Run-time dependency libpcap found: YES 1.10.4 00:02:43.040 Has header "pcap.h" with dependency libpcap: YES 00:02:43.040 Compiler for C supports arguments -Wcast-qual: YES 00:02:43.040 Compiler for C supports arguments -Wdeprecated: YES 00:02:43.040 Compiler for C supports arguments -Wformat: YES 00:02:43.040 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:43.040 Compiler for C supports arguments -Wformat-security: NO 00:02:43.040 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.040 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:43.040 Compiler for C supports arguments -Wnested-externs: YES 00:02:43.040 Compiler for C supports arguments -Wold-style-definition: YES 00:02:43.040 Compiler for C supports arguments 
-Wpointer-arith: YES 00:02:43.040 Compiler for C supports arguments -Wsign-compare: YES 00:02:43.040 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:43.040 Compiler for C supports arguments -Wundef: YES 00:02:43.040 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.040 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:43.040 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:43.040 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.040 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:43.040 Program objdump found: YES (/usr/bin/objdump) 00:02:43.040 Compiler for C supports arguments -mavx512f: YES 00:02:43.040 Checking if "AVX512 checking" compiles: YES 00:02:43.040 Fetching value of define "__SSE4_2__" : 1 00:02:43.040 Fetching value of define "__AES__" : 1 00:02:43.041 Fetching value of define "__AVX__" : 1 00:02:43.041 Fetching value of define "__AVX2__" : 1 00:02:43.041 Fetching value of define "__AVX512BW__" : 1 00:02:43.041 Fetching value of define "__AVX512CD__" : 1 00:02:43.041 Fetching value of define "__AVX512DQ__" : 1 00:02:43.041 Fetching value of define "__AVX512F__" : 1 00:02:43.041 Fetching value of define "__AVX512VL__" : 1 00:02:43.041 Fetching value of define "__PCLMUL__" : 1 00:02:43.041 Fetching value of define "__RDRND__" : 1 00:02:43.041 Fetching value of define "__RDSEED__" : 1 00:02:43.041 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:43.041 Fetching value of define "__znver1__" : (undefined) 00:02:43.041 Fetching value of define "__znver2__" : (undefined) 00:02:43.041 Fetching value of define "__znver3__" : (undefined) 00:02:43.041 Fetching value of define "__znver4__" : (undefined) 00:02:43.041 Library asan found: YES 00:02:43.041 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:43.041 Message: lib/log: Defining dependency "log" 00:02:43.041 Message: lib/kvargs: Defining dependency "kvargs" 00:02:43.041 Message: lib/telemetry: Defining dependency "telemetry" 00:02:43.041 Library rt found: YES 00:02:43.041 Checking for function "getentropy" : NO 00:02:43.041 Message: lib/eal: Defining dependency "eal" 00:02:43.041 Message: lib/ring: Defining dependency "ring" 00:02:43.041 Message: lib/rcu: Defining dependency "rcu" 00:02:43.041 Message: lib/mempool: Defining dependency "mempool" 00:02:43.041 Message: lib/mbuf: Defining dependency "mbuf" 00:02:43.041 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:43.041 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.041 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.041 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.041 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.041 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:43.041 Compiler for C supports arguments -mpclmul: YES 00:02:43.041 Compiler for C supports arguments -maes: YES 00:02:43.041 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.041 Compiler for C supports arguments -mavx512bw: YES 00:02:43.041 Compiler for C supports arguments -mavx512dq: YES 00:02:43.041 Compiler for C supports arguments -mavx512vl: YES 00:02:43.041 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:43.041 Compiler for C supports arguments -mavx2: YES 00:02:43.041 Compiler for C supports arguments -mavx: YES 00:02:43.041 Message: lib/net: Defining dependency "net" 00:02:43.041 Message: lib/meter: Defining dependency "meter" 00:02:43.041 Message: 
lib/ethdev: Defining dependency "ethdev" 00:02:43.041 Message: lib/pci: Defining dependency "pci" 00:02:43.041 Message: lib/cmdline: Defining dependency "cmdline" 00:02:43.041 Message: lib/hash: Defining dependency "hash" 00:02:43.041 Message: lib/timer: Defining dependency "timer" 00:02:43.041 Message: lib/compressdev: Defining dependency "compressdev" 00:02:43.041 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:43.041 Message: lib/dmadev: Defining dependency "dmadev" 00:02:43.041 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:43.041 Message: lib/power: Defining dependency "power" 00:02:43.041 Message: lib/reorder: Defining dependency "reorder" 00:02:43.041 Message: lib/security: Defining dependency "security" 00:02:43.041 Has header "linux/userfaultfd.h" : YES 00:02:43.041 Has header "linux/vduse.h" : YES 00:02:43.041 Message: lib/vhost: Defining dependency "vhost" 00:02:43.041 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.041 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.041 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:43.041 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:43.041 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:43.041 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:43.041 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:43.041 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:43.041 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:43.041 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:43.041 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:43.041 Configuring doxy-api-html.conf using configuration 00:02:43.041 Configuring doxy-api-man.conf using configuration 00:02:43.041 Program mandb found: YES (/usr/bin/mandb) 00:02:43.041 Program sphinx-build found: NO 00:02:43.041 Configuring rte_build_config.h using configuration 00:02:43.041 Message: 00:02:43.041 ================= 00:02:43.041 Applications Enabled 00:02:43.041 ================= 00:02:43.041 00:02:43.041 apps: 00:02:43.041 00:02:43.041 00:02:43.041 Message: 00:02:43.041 ================= 00:02:43.041 Libraries Enabled 00:02:43.041 ================= 00:02:43.041 00:02:43.041 libs: 00:02:43.041 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:43.041 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:43.041 cryptodev, dmadev, power, reorder, security, vhost, 00:02:43.041 00:02:43.041 Message: 00:02:43.041 =============== 00:02:43.041 Drivers Enabled 00:02:43.041 =============== 00:02:43.041 00:02:43.041 common: 00:02:43.041 00:02:43.041 bus: 00:02:43.041 pci, vdev, 00:02:43.041 mempool: 00:02:43.041 ring, 00:02:43.041 dma: 00:02:43.041 00:02:43.041 net: 00:02:43.041 00:02:43.041 crypto: 00:02:43.041 00:02:43.041 compress: 00:02:43.041 00:02:43.041 vdpa: 00:02:43.041 00:02:43.041 00:02:43.041 Message: 00:02:43.041 ================= 00:02:43.041 Content Skipped 00:02:43.041 ================= 00:02:43.041 00:02:43.041 apps: 00:02:43.041 dumpcap: explicitly disabled via build config 00:02:43.041 graph: explicitly disabled via build config 00:02:43.041 pdump: explicitly disabled via build config 00:02:43.041 proc-info: explicitly disabled via build config 00:02:43.041 test-acl: explicitly disabled via build config 00:02:43.041 test-bbdev: explicitly disabled via build config 00:02:43.041 
test-cmdline: explicitly disabled via build config 00:02:43.041 test-compress-perf: explicitly disabled via build config 00:02:43.041 test-crypto-perf: explicitly disabled via build config 00:02:43.041 test-dma-perf: explicitly disabled via build config 00:02:43.041 test-eventdev: explicitly disabled via build config 00:02:43.041 test-fib: explicitly disabled via build config 00:02:43.041 test-flow-perf: explicitly disabled via build config 00:02:43.041 test-gpudev: explicitly disabled via build config 00:02:43.041 test-mldev: explicitly disabled via build config 00:02:43.041 test-pipeline: explicitly disabled via build config 00:02:43.041 test-pmd: explicitly disabled via build config 00:02:43.041 test-regex: explicitly disabled via build config 00:02:43.041 test-sad: explicitly disabled via build config 00:02:43.041 test-security-perf: explicitly disabled via build config 00:02:43.041 00:02:43.041 libs: 00:02:43.041 metrics: explicitly disabled via build config 00:02:43.041 acl: explicitly disabled via build config 00:02:43.041 bbdev: explicitly disabled via build config 00:02:43.041 bitratestats: explicitly disabled via build config 00:02:43.041 bpf: explicitly disabled via build config 00:02:43.041 cfgfile: explicitly disabled via build config 00:02:43.041 distributor: explicitly disabled via build config 00:02:43.041 efd: explicitly disabled via build config 00:02:43.041 eventdev: explicitly disabled via build config 00:02:43.041 dispatcher: explicitly disabled via build config 00:02:43.041 gpudev: explicitly disabled via build config 00:02:43.041 gro: explicitly disabled via build config 00:02:43.041 gso: explicitly disabled via build config 00:02:43.041 ip_frag: explicitly disabled via build config 00:02:43.041 jobstats: explicitly disabled via build config 00:02:43.041 latencystats: explicitly disabled via build config 00:02:43.041 lpm: explicitly disabled via build config 00:02:43.041 member: explicitly disabled via build config 00:02:43.041 pcapng: explicitly disabled via build config 00:02:43.041 rawdev: explicitly disabled via build config 00:02:43.041 regexdev: explicitly disabled via build config 00:02:43.041 mldev: explicitly disabled via build config 00:02:43.041 rib: explicitly disabled via build config 00:02:43.041 sched: explicitly disabled via build config 00:02:43.041 stack: explicitly disabled via build config 00:02:43.041 ipsec: explicitly disabled via build config 00:02:43.041 pdcp: explicitly disabled via build config 00:02:43.041 fib: explicitly disabled via build config 00:02:43.041 port: explicitly disabled via build config 00:02:43.041 pdump: explicitly disabled via build config 00:02:43.041 table: explicitly disabled via build config 00:02:43.041 pipeline: explicitly disabled via build config 00:02:43.041 graph: explicitly disabled via build config 00:02:43.041 node: explicitly disabled via build config 00:02:43.041 00:02:43.041 drivers: 00:02:43.041 common/cpt: not in enabled drivers build config 00:02:43.041 common/dpaax: not in enabled drivers build config 00:02:43.041 common/iavf: not in enabled drivers build config 00:02:43.041 common/idpf: not in enabled drivers build config 00:02:43.041 common/mvep: not in enabled drivers build config 00:02:43.041 common/octeontx: not in enabled drivers build config 00:02:43.041 bus/auxiliary: not in enabled drivers build config 00:02:43.041 bus/cdx: not in enabled drivers build config 00:02:43.041 bus/dpaa: not in enabled drivers build config 00:02:43.041 bus/fslmc: not in enabled drivers build config 00:02:43.041 
bus/ifpga: not in enabled drivers build config 00:02:43.041 bus/platform: not in enabled drivers build config 00:02:43.041 bus/vmbus: not in enabled drivers build config 00:02:43.041 common/cnxk: not in enabled drivers build config 00:02:43.041 common/mlx5: not in enabled drivers build config 00:02:43.041 common/nfp: not in enabled drivers build config 00:02:43.041 common/qat: not in enabled drivers build config 00:02:43.041 common/sfc_efx: not in enabled drivers build config 00:02:43.041 mempool/bucket: not in enabled drivers build config 00:02:43.041 mempool/cnxk: not in enabled drivers build config 00:02:43.041 mempool/dpaa: not in enabled drivers build config 00:02:43.041 mempool/dpaa2: not in enabled drivers build config 00:02:43.041 mempool/octeontx: not in enabled drivers build config 00:02:43.041 mempool/stack: not in enabled drivers build config 00:02:43.041 dma/cnxk: not in enabled drivers build config 00:02:43.041 dma/dpaa: not in enabled drivers build config 00:02:43.041 dma/dpaa2: not in enabled drivers build config 00:02:43.041 dma/hisilicon: not in enabled drivers build config 00:02:43.041 dma/idxd: not in enabled drivers build config 00:02:43.041 dma/ioat: not in enabled drivers build config 00:02:43.041 dma/skeleton: not in enabled drivers build config 00:02:43.041 net/af_packet: not in enabled drivers build config 00:02:43.042 net/af_xdp: not in enabled drivers build config 00:02:43.042 net/ark: not in enabled drivers build config 00:02:43.042 net/atlantic: not in enabled drivers build config 00:02:43.042 net/avp: not in enabled drivers build config 00:02:43.042 net/axgbe: not in enabled drivers build config 00:02:43.042 net/bnx2x: not in enabled drivers build config 00:02:43.042 net/bnxt: not in enabled drivers build config 00:02:43.042 net/bonding: not in enabled drivers build config 00:02:43.042 net/cnxk: not in enabled drivers build config 00:02:43.042 net/cpfl: not in enabled drivers build config 00:02:43.042 net/cxgbe: not in enabled drivers build config 00:02:43.042 net/dpaa: not in enabled drivers build config 00:02:43.042 net/dpaa2: not in enabled drivers build config 00:02:43.042 net/e1000: not in enabled drivers build config 00:02:43.042 net/ena: not in enabled drivers build config 00:02:43.042 net/enetc: not in enabled drivers build config 00:02:43.042 net/enetfec: not in enabled drivers build config 00:02:43.042 net/enic: not in enabled drivers build config 00:02:43.042 net/failsafe: not in enabled drivers build config 00:02:43.042 net/fm10k: not in enabled drivers build config 00:02:43.042 net/gve: not in enabled drivers build config 00:02:43.042 net/hinic: not in enabled drivers build config 00:02:43.042 net/hns3: not in enabled drivers build config 00:02:43.042 net/i40e: not in enabled drivers build config 00:02:43.042 net/iavf: not in enabled drivers build config 00:02:43.042 net/ice: not in enabled drivers build config 00:02:43.042 net/idpf: not in enabled drivers build config 00:02:43.042 net/igc: not in enabled drivers build config 00:02:43.042 net/ionic: not in enabled drivers build config 00:02:43.042 net/ipn3ke: not in enabled drivers build config 00:02:43.042 net/ixgbe: not in enabled drivers build config 00:02:43.042 net/mana: not in enabled drivers build config 00:02:43.042 net/memif: not in enabled drivers build config 00:02:43.042 net/mlx4: not in enabled drivers build config 00:02:43.042 net/mlx5: not in enabled drivers build config 00:02:43.042 net/mvneta: not in enabled drivers build config 00:02:43.042 net/mvpp2: not in enabled drivers 
build config 00:02:43.042 net/netvsc: not in enabled drivers build config 00:02:43.042 net/nfb: not in enabled drivers build config 00:02:43.042 net/nfp: not in enabled drivers build config 00:02:43.042 net/ngbe: not in enabled drivers build config 00:02:43.042 net/null: not in enabled drivers build config 00:02:43.042 net/octeontx: not in enabled drivers build config 00:02:43.042 net/octeon_ep: not in enabled drivers build config 00:02:43.042 net/pcap: not in enabled drivers build config 00:02:43.042 net/pfe: not in enabled drivers build config 00:02:43.042 net/qede: not in enabled drivers build config 00:02:43.042 net/ring: not in enabled drivers build config 00:02:43.042 net/sfc: not in enabled drivers build config 00:02:43.042 net/softnic: not in enabled drivers build config 00:02:43.042 net/tap: not in enabled drivers build config 00:02:43.042 net/thunderx: not in enabled drivers build config 00:02:43.042 net/txgbe: not in enabled drivers build config 00:02:43.042 net/vdev_netvsc: not in enabled drivers build config 00:02:43.042 net/vhost: not in enabled drivers build config 00:02:43.042 net/virtio: not in enabled drivers build config 00:02:43.042 net/vmxnet3: not in enabled drivers build config 00:02:43.042 raw/*: missing internal dependency, "rawdev" 00:02:43.042 crypto/armv8: not in enabled drivers build config 00:02:43.042 crypto/bcmfs: not in enabled drivers build config 00:02:43.042 crypto/caam_jr: not in enabled drivers build config 00:02:43.042 crypto/ccp: not in enabled drivers build config 00:02:43.042 crypto/cnxk: not in enabled drivers build config 00:02:43.042 crypto/dpaa_sec: not in enabled drivers build config 00:02:43.042 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.042 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.042 crypto/mlx5: not in enabled drivers build config 00:02:43.042 crypto/mvsam: not in enabled drivers build config 00:02:43.042 crypto/nitrox: not in enabled drivers build config 00:02:43.042 crypto/null: not in enabled drivers build config 00:02:43.042 crypto/octeontx: not in enabled drivers build config 00:02:43.042 crypto/openssl: not in enabled drivers build config 00:02:43.042 crypto/scheduler: not in enabled drivers build config 00:02:43.042 crypto/uadk: not in enabled drivers build config 00:02:43.042 crypto/virtio: not in enabled drivers build config 00:02:43.042 compress/isal: not in enabled drivers build config 00:02:43.042 compress/mlx5: not in enabled drivers build config 00:02:43.042 compress/octeontx: not in enabled drivers build config 00:02:43.042 compress/zlib: not in enabled drivers build config 00:02:43.042 regex/*: missing internal dependency, "regexdev" 00:02:43.042 ml/*: missing internal dependency, "mldev" 00:02:43.042 vdpa/ifc: not in enabled drivers build config 00:02:43.042 vdpa/mlx5: not in enabled drivers build config 00:02:43.042 vdpa/nfp: not in enabled drivers build config 00:02:43.042 vdpa/sfc: not in enabled drivers build config 00:02:43.042 event/*: missing internal dependency, "eventdev" 00:02:43.042 baseband/*: missing internal dependency, "bbdev" 00:02:43.042 gpu/*: missing internal dependency, "gpudev" 00:02:43.042 00:02:43.042 00:02:43.042 Build targets in project: 84 00:02:43.042 00:02:43.042 DPDK 23.11.0 00:02:43.042 00:02:43.042 User defined options 00:02:43.042 buildtype : debug 00:02:43.042 default_library : shared 00:02:43.042 libdir : lib 00:02:43.042 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:43.042 b_sanitize : address 00:02:43.042 c_args : -fPIC -Werror 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:43.042 c_link_args : 00:02:43.042 cpu_instruction_set: native 00:02:43.042 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:43.042 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:43.042 enable_docs : false 00:02:43.042 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:43.042 enable_kmods : false 00:02:43.042 tests : false 00:02:43.042 00:02:43.042 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.042 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:43.042 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:43.042 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.042 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.042 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.042 [5/264] Linking static target lib/librte_kvargs.a 00:02:43.042 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.042 [7/264] Linking static target lib/librte_log.a 00:02:43.042 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.302 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.302 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.302 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.562 [12/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.562 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.562 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.562 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.562 [16/264] Linking static target lib/librte_telemetry.a 00:02:43.562 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.820 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.820 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.820 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.820 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.820 [22/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.820 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.820 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.820 [25/264] Linking target lib/librte_log.so.24.0 00:02:44.079 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.079 [27/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:44.079 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.079 [29/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.079 [30/264] Linking target lib/librte_kvargs.so.24.0 00:02:44.079 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.338 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.338 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.338 [34/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:44.338 [35/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.338 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.338 [37/264] Linking target lib/librte_telemetry.so.24.0 00:02:44.338 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.338 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.338 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.338 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.338 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.338 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.596 [44/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:44.596 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.596 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.596 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.854 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.854 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.854 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.854 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.854 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.854 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.854 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.854 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.112 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.112 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.112 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.112 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.112 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.112 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.112 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.112 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.112 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.370 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.370 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.370 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.370 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.370 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.370 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.370 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.628 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.628 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.628 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.628 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.628 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.628 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.628 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.628 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.887 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.887 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.887 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.887 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.887 [84/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.887 [85/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.887 [86/264] Linking static target lib/librte_ring.a 00:02:46.147 [87/264] Linking static target lib/librte_eal.a 00:02:46.147 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.147 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.147 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.147 [91/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.147 [92/264] Linking static target lib/librte_rcu.a 00:02:46.406 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.406 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.406 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.406 [96/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.406 [97/264] Linking static target lib/librte_mempool.a 00:02:46.665 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.665 [99/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.665 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.665 [101/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.665 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.665 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.927 [104/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.927 [105/264] Linking static target lib/librte_mbuf.a 00:02:46.927 [106/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.927 [107/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:46.927 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.927 [109/264] Linking static target lib/librte_net.a 00:02:46.927 [110/264] Linking static target lib/librte_meter.a 00:02:47.185 [111/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:47.185 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:47.185 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.443 [114/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.443 [115/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.443 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.443 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.700 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.700 [119/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.958 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.958 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.958 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:48.216 [123/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.216 [124/264] Linking static target lib/librte_pci.a 00:02:48.216 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.216 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.216 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.216 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.216 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:48.216 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.216 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.216 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.216 [133/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.474 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:48.474 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.474 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.474 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:48.474 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.474 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.474 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:48.474 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.474 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.732 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.732 [144/264] Linking static target lib/librte_cmdline.a 00:02:48.732 [145/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.732 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.732 [147/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.732 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.732 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.732 [150/264] 
Linking static target lib/librte_timer.a 00:02:48.989 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.989 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.989 [153/264] Linking static target lib/librte_ethdev.a 00:02:49.247 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:49.247 [155/264] Linking static target lib/librte_compressdev.a 00:02:49.247 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.247 [157/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:49.247 [158/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.247 [159/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.247 [160/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.247 [161/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.505 [162/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:49.505 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.505 [164/264] Linking static target lib/librte_hash.a 00:02:49.505 [165/264] Linking static target lib/librte_dmadev.a 00:02:49.505 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.505 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.505 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.763 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.763 [170/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.763 [171/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.763 [172/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.763 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.021 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:50.021 [175/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:50.021 [176/264] Linking static target lib/librte_cryptodev.a 00:02:50.021 [177/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:50.021 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.021 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:50.021 [180/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.279 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.279 [182/264] Linking static target lib/librte_power.a 00:02:50.279 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.279 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.279 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.279 [186/264] Linking static target lib/librte_reorder.a 00:02:50.537 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.538 [188/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.538 [189/264] Linking static target lib/librte_security.a 00:02:50.796 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.796 [191/264] 
Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.054 [192/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.054 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.054 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.054 [195/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.054 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.054 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.312 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.312 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.312 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.312 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.312 [202/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.312 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.312 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.569 [205/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.569 [206/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.569 [207/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.569 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.569 [209/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.569 [210/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.569 [211/264] Linking static target drivers/librte_bus_pci.a 00:02:51.569 [212/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.828 [213/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.828 [214/264] Linking static target drivers/librte_bus_vdev.a 00:02:51.828 [215/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.828 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.828 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.828 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.828 [219/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.828 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.828 [221/264] Linking static target drivers/librte_mempool_ring.a 00:02:51.828 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.828 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.395 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.826 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.826 [226/264] Linking target lib/librte_eal.so.24.0 00:02:53.826 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 
00:02:53.826 [228/264] Linking target lib/librte_meter.so.24.0 00:02:53.826 [229/264] Linking target lib/librte_ring.so.24.0 00:02:53.826 [230/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:53.826 [231/264] Linking target lib/librte_pci.so.24.0 00:02:53.826 [232/264] Linking target lib/librte_dmadev.so.24.0 00:02:53.826 [233/264] Linking target lib/librte_timer.so.24.0 00:02:53.826 [234/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:53.826 [235/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:53.826 [236/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:53.826 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:53.826 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:53.826 [239/264] Linking target lib/librte_rcu.so.24.0 00:02:53.826 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:53.826 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:54.083 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:54.083 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:54.083 [244/264] Linking target lib/librte_mbuf.so.24.0 00:02:54.083 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:54.083 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:54.083 [247/264] Linking target lib/librte_cryptodev.so.24.0 00:02:54.083 [248/264] Linking target lib/librte_reorder.so.24.0 00:02:54.083 [249/264] Linking target lib/librte_compressdev.so.24.0 00:02:54.083 [250/264] Linking target lib/librte_net.so.24.0 00:02:54.341 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:54.341 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:54.341 [253/264] Linking target lib/librte_hash.so.24.0 00:02:54.341 [254/264] Linking target lib/librte_security.so.24.0 00:02:54.341 [255/264] Linking target lib/librte_cmdline.so.24.0 00:02:54.341 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:54.600 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.600 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:54.600 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:54.600 [260/264] Linking target lib/librte_power.so.24.0 00:02:55.165 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.165 [262/264] Linking static target lib/librte_vhost.a 00:02:56.547 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.808 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:56.808 INFO: autodetecting backend as ninja 00:02:56.808 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.438 CC lib/ut/ut.o 00:02:57.438 CC lib/ut_mock/mock.o 00:02:57.710 CC lib/log/log.o 00:02:57.710 CC lib/log/log_flags.o 00:02:57.710 CC lib/log/log_deprecated.o 00:02:57.710 LIB libspdk_ut_mock.a 00:02:57.710 SO libspdk_ut_mock.so.5.0 00:02:57.710 LIB libspdk_ut.a 00:02:57.710 LIB libspdk_log.a 00:02:57.710 SO libspdk_ut.so.1.0 00:02:57.710 SO libspdk_log.so.6.1 00:02:57.710 SYMLINK libspdk_ut_mock.so 
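The DPDK pass that finishes above was configured with the user-defined options recorded in the "User defined options" summary earlier in this log. A minimal sketch of an equivalent manual configure-and-build invocation is shown below, assuming meson and ninja are called directly; in this CI run the build is driven by the SPDK/autotest scripts, so the exact command line does not appear in the log. The option names and values are copied from that summary, while the invocation form itself is an assumption.

  # Sketch only: option values taken from the logged "User defined options" summary;
  # the command form is assumed, not taken from this log.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --buildtype=debug --default-library=shared --libdir=lib \
    -Db_sanitize=address \
    -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
  ninja -C build-tmp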
00:02:57.710 SYMLINK libspdk_ut.so 00:02:57.710 SYMLINK libspdk_log.so 00:02:57.972 CC lib/util/bit_array.o 00:02:57.972 CC lib/util/base64.o 00:02:57.972 CXX lib/trace_parser/trace.o 00:02:57.972 CC lib/ioat/ioat.o 00:02:57.972 CC lib/util/cpuset.o 00:02:57.972 CC lib/util/crc16.o 00:02:57.972 CC lib/util/crc32.o 00:02:57.972 CC lib/util/crc32c.o 00:02:57.972 CC lib/dma/dma.o 00:02:57.972 CC lib/vfio_user/host/vfio_user_pci.o 00:02:57.972 CC lib/util/crc32_ieee.o 00:02:57.972 CC lib/util/crc64.o 00:02:57.972 CC lib/util/dif.o 00:02:57.972 CC lib/util/fd.o 00:02:57.972 LIB libspdk_dma.a 00:02:57.972 CC lib/util/file.o 00:02:57.972 SO libspdk_dma.so.3.0 00:02:57.972 CC lib/util/hexlify.o 00:02:58.231 CC lib/vfio_user/host/vfio_user.o 00:02:58.231 CC lib/util/iov.o 00:02:58.231 SYMLINK libspdk_dma.so 00:02:58.231 CC lib/util/math.o 00:02:58.231 LIB libspdk_ioat.a 00:02:58.231 CC lib/util/pipe.o 00:02:58.231 SO libspdk_ioat.so.6.0 00:02:58.231 CC lib/util/strerror_tls.o 00:02:58.231 CC lib/util/string.o 00:02:58.231 CC lib/util/uuid.o 00:02:58.231 SYMLINK libspdk_ioat.so 00:02:58.231 CC lib/util/fd_group.o 00:02:58.231 CC lib/util/xor.o 00:02:58.231 CC lib/util/zipf.o 00:02:58.231 LIB libspdk_vfio_user.a 00:02:58.231 SO libspdk_vfio_user.so.4.0 00:02:58.231 SYMLINK libspdk_vfio_user.so 00:02:58.489 LIB libspdk_util.a 00:02:58.747 SO libspdk_util.so.8.0 00:02:58.747 LIB libspdk_trace_parser.a 00:02:58.747 SYMLINK libspdk_util.so 00:02:58.747 SO libspdk_trace_parser.so.4.0 00:02:58.747 CC lib/json/json_parse.o 00:02:58.747 CC lib/json/json_util.o 00:02:58.747 CC lib/json/json_write.o 00:02:58.747 CC lib/rdma/common.o 00:02:58.747 CC lib/rdma/rdma_verbs.o 00:02:58.747 CC lib/idxd/idxd.o 00:02:58.747 CC lib/vmd/vmd.o 00:02:58.747 CC lib/env_dpdk/env.o 00:02:58.747 CC lib/conf/conf.o 00:02:58.747 SYMLINK libspdk_trace_parser.so 00:02:58.747 CC lib/vmd/led.o 00:02:59.007 CC lib/idxd/idxd_user.o 00:02:59.007 CC lib/idxd/idxd_kernel.o 00:02:59.007 CC lib/env_dpdk/memory.o 00:02:59.007 LIB libspdk_conf.a 00:02:59.007 SO libspdk_conf.so.5.0 00:02:59.007 CC lib/env_dpdk/pci.o 00:02:59.007 LIB libspdk_rdma.a 00:02:59.007 SYMLINK libspdk_conf.so 00:02:59.007 CC lib/env_dpdk/init.o 00:02:59.007 SO libspdk_rdma.so.5.0 00:02:59.007 LIB libspdk_json.a 00:02:59.265 CC lib/env_dpdk/threads.o 00:02:59.265 SO libspdk_json.so.5.1 00:02:59.265 SYMLINK libspdk_rdma.so 00:02:59.265 CC lib/env_dpdk/pci_ioat.o 00:02:59.265 CC lib/env_dpdk/pci_virtio.o 00:02:59.265 SYMLINK libspdk_json.so 00:02:59.266 CC lib/env_dpdk/pci_vmd.o 00:02:59.266 CC lib/env_dpdk/pci_idxd.o 00:02:59.266 CC lib/env_dpdk/pci_event.o 00:02:59.266 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.266 CC lib/env_dpdk/sigbus_handler.o 00:02:59.266 CC lib/env_dpdk/pci_dpdk.o 00:02:59.523 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:59.523 LIB libspdk_idxd.a 00:02:59.523 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:59.523 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.523 SO libspdk_idxd.so.11.0 00:02:59.523 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.523 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:59.523 LIB libspdk_vmd.a 00:02:59.523 SYMLINK libspdk_idxd.so 00:02:59.523 SO libspdk_vmd.so.5.0 00:02:59.523 SYMLINK libspdk_vmd.so 00:02:59.781 LIB libspdk_jsonrpc.a 00:02:59.781 SO libspdk_jsonrpc.so.5.1 00:02:59.781 SYMLINK libspdk_jsonrpc.so 00:03:00.039 CC lib/rpc/rpc.o 00:03:00.039 LIB libspdk_rpc.a 00:03:00.039 SO libspdk_rpc.so.5.0 00:03:00.297 SYMLINK libspdk_rpc.so 00:03:00.297 LIB libspdk_env_dpdk.a 00:03:00.297 SO libspdk_env_dpdk.so.13.0 00:03:00.297 CC 
lib/trace/trace_flags.o 00:03:00.297 CC lib/trace/trace.o 00:03:00.297 CC lib/trace/trace_rpc.o 00:03:00.297 CC lib/sock/sock.o 00:03:00.297 CC lib/sock/sock_rpc.o 00:03:00.297 CC lib/notify/notify.o 00:03:00.297 CC lib/notify/notify_rpc.o 00:03:00.297 SYMLINK libspdk_env_dpdk.so 00:03:00.556 LIB libspdk_notify.a 00:03:00.556 SO libspdk_notify.so.5.0 00:03:00.556 SYMLINK libspdk_notify.so 00:03:00.556 LIB libspdk_trace.a 00:03:00.556 SO libspdk_trace.so.9.0 00:03:00.556 SYMLINK libspdk_trace.so 00:03:00.556 LIB libspdk_sock.a 00:03:00.814 SO libspdk_sock.so.8.0 00:03:00.814 CC lib/thread/thread.o 00:03:00.814 CC lib/thread/iobuf.o 00:03:00.814 SYMLINK libspdk_sock.so 00:03:00.814 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.814 CC lib/nvme/nvme_fabric.o 00:03:00.814 CC lib/nvme/nvme_ctrlr.o 00:03:00.814 CC lib/nvme/nvme_ns.o 00:03:00.814 CC lib/nvme/nvme_ns_cmd.o 00:03:00.814 CC lib/nvme/nvme_pcie_common.o 00:03:00.814 CC lib/nvme/nvme_qpair.o 00:03:00.814 CC lib/nvme/nvme_pcie.o 00:03:01.071 CC lib/nvme/nvme.o 00:03:01.637 CC lib/nvme/nvme_quirks.o 00:03:01.637 CC lib/nvme/nvme_transport.o 00:03:01.637 CC lib/nvme/nvme_discovery.o 00:03:01.637 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.637 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.637 CC lib/nvme/nvme_tcp.o 00:03:01.637 CC lib/nvme/nvme_opal.o 00:03:01.895 CC lib/nvme/nvme_io_msg.o 00:03:01.895 CC lib/nvme/nvme_poll_group.o 00:03:01.895 CC lib/nvme/nvme_zns.o 00:03:01.895 CC lib/nvme/nvme_cuse.o 00:03:02.153 CC lib/nvme/nvme_vfio_user.o 00:03:02.153 CC lib/nvme/nvme_rdma.o 00:03:02.153 LIB libspdk_thread.a 00:03:02.153 SO libspdk_thread.so.9.0 00:03:02.411 SYMLINK libspdk_thread.so 00:03:02.411 CC lib/virtio/virtio.o 00:03:02.411 CC lib/init/json_config.o 00:03:02.411 CC lib/accel/accel.o 00:03:02.411 CC lib/blob/blobstore.o 00:03:02.411 CC lib/blob/request.o 00:03:02.668 CC lib/blob/zeroes.o 00:03:02.668 CC lib/init/subsystem.o 00:03:02.668 CC lib/virtio/virtio_vhost_user.o 00:03:02.668 CC lib/virtio/virtio_vfio_user.o 00:03:02.668 CC lib/init/subsystem_rpc.o 00:03:02.668 CC lib/init/rpc.o 00:03:02.668 CC lib/blob/blob_bs_dev.o 00:03:02.668 CC lib/accel/accel_rpc.o 00:03:02.926 CC lib/accel/accel_sw.o 00:03:02.926 LIB libspdk_init.a 00:03:02.926 SO libspdk_init.so.4.0 00:03:02.926 CC lib/virtio/virtio_pci.o 00:03:02.926 SYMLINK libspdk_init.so 00:03:03.185 CC lib/event/reactor.o 00:03:03.185 CC lib/event/app.o 00:03:03.185 CC lib/event/scheduler_static.o 00:03:03.185 CC lib/event/log_rpc.o 00:03:03.185 CC lib/event/app_rpc.o 00:03:03.185 LIB libspdk_accel.a 00:03:03.185 SO libspdk_accel.so.14.0 00:03:03.185 LIB libspdk_virtio.a 00:03:03.185 SYMLINK libspdk_accel.so 00:03:03.185 SO libspdk_virtio.so.6.0 00:03:03.443 SYMLINK libspdk_virtio.so 00:03:03.443 CC lib/bdev/bdev.o 00:03:03.443 CC lib/bdev/part.o 00:03:03.443 CC lib/bdev/scsi_nvme.o 00:03:03.443 CC lib/bdev/bdev_rpc.o 00:03:03.443 CC lib/bdev/bdev_zone.o 00:03:03.443 LIB libspdk_nvme.a 00:03:03.443 LIB libspdk_event.a 00:03:03.443 SO libspdk_event.so.12.0 00:03:03.702 SYMLINK libspdk_event.so 00:03:03.702 SO libspdk_nvme.so.12.0 00:03:03.702 SYMLINK libspdk_nvme.so 00:03:05.636 LIB libspdk_blob.a 00:03:05.636 SO libspdk_blob.so.10.1 00:03:05.636 SYMLINK libspdk_blob.so 00:03:05.636 CC lib/blobfs/blobfs.o 00:03:05.636 CC lib/blobfs/tree.o 00:03:05.636 CC lib/lvol/lvol.o 00:03:05.893 LIB libspdk_bdev.a 00:03:06.154 SO libspdk_bdev.so.14.0 00:03:06.154 SYMLINK libspdk_bdev.so 00:03:06.154 LIB libspdk_blobfs.a 00:03:06.154 CC lib/nbd/nbd.o 00:03:06.154 CC lib/nbd/nbd_rpc.o 00:03:06.154 CC 
lib/scsi/dev.o 00:03:06.154 CC lib/scsi/lun.o 00:03:06.154 CC lib/scsi/port.o 00:03:06.154 CC lib/nvmf/ctrlr.o 00:03:06.154 CC lib/ublk/ublk.o 00:03:06.154 CC lib/ftl/ftl_core.o 00:03:06.154 SO libspdk_blobfs.so.9.0 00:03:06.415 SYMLINK libspdk_blobfs.so 00:03:06.415 CC lib/ublk/ublk_rpc.o 00:03:06.415 CC lib/scsi/scsi.o 00:03:06.415 CC lib/scsi/scsi_bdev.o 00:03:06.415 CC lib/scsi/scsi_pr.o 00:03:06.415 LIB libspdk_lvol.a 00:03:06.415 SO libspdk_lvol.so.9.1 00:03:06.415 CC lib/scsi/scsi_rpc.o 00:03:06.415 CC lib/scsi/task.o 00:03:06.415 CC lib/ftl/ftl_init.o 00:03:06.415 SYMLINK libspdk_lvol.so 00:03:06.415 CC lib/nvmf/ctrlr_discovery.o 00:03:06.676 CC lib/ftl/ftl_layout.o 00:03:06.676 CC lib/ftl/ftl_debug.o 00:03:06.676 LIB libspdk_nbd.a 00:03:06.676 CC lib/nvmf/ctrlr_bdev.o 00:03:06.676 SO libspdk_nbd.so.6.0 00:03:06.676 CC lib/nvmf/subsystem.o 00:03:06.676 SYMLINK libspdk_nbd.so 00:03:06.676 CC lib/nvmf/nvmf.o 00:03:06.676 CC lib/ftl/ftl_io.o 00:03:06.676 CC lib/ftl/ftl_sb.o 00:03:06.936 LIB libspdk_ublk.a 00:03:06.936 CC lib/ftl/ftl_l2p.o 00:03:06.936 LIB libspdk_scsi.a 00:03:06.936 SO libspdk_ublk.so.2.0 00:03:06.936 CC lib/nvmf/nvmf_rpc.o 00:03:06.936 SO libspdk_scsi.so.8.0 00:03:06.936 SYMLINK libspdk_ublk.so 00:03:06.936 CC lib/nvmf/transport.o 00:03:06.936 CC lib/ftl/ftl_l2p_flat.o 00:03:06.936 CC lib/ftl/ftl_nv_cache.o 00:03:06.936 SYMLINK libspdk_scsi.so 00:03:06.936 CC lib/ftl/ftl_band.o 00:03:07.196 CC lib/iscsi/conn.o 00:03:07.196 CC lib/iscsi/init_grp.o 00:03:07.196 CC lib/nvmf/tcp.o 00:03:07.456 CC lib/nvmf/rdma.o 00:03:07.457 CC lib/iscsi/iscsi.o 00:03:07.457 CC lib/iscsi/md5.o 00:03:07.457 CC lib/iscsi/param.o 00:03:07.716 CC lib/ftl/ftl_band_ops.o 00:03:07.716 CC lib/iscsi/portal_grp.o 00:03:07.716 CC lib/vhost/vhost.o 00:03:07.716 CC lib/vhost/vhost_rpc.o 00:03:07.976 CC lib/iscsi/tgt_node.o 00:03:07.976 CC lib/iscsi/iscsi_subsystem.o 00:03:07.976 CC lib/iscsi/iscsi_rpc.o 00:03:07.976 CC lib/vhost/vhost_scsi.o 00:03:07.976 CC lib/ftl/ftl_writer.o 00:03:08.236 CC lib/ftl/ftl_rq.o 00:03:08.236 CC lib/iscsi/task.o 00:03:08.236 CC lib/ftl/ftl_reloc.o 00:03:08.236 CC lib/vhost/vhost_blk.o 00:03:08.236 CC lib/ftl/ftl_l2p_cache.o 00:03:08.236 CC lib/vhost/rte_vhost_user.o 00:03:08.236 CC lib/ftl/ftl_p2l.o 00:03:08.496 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.496 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.496 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.496 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.496 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.757 LIB libspdk_iscsi.a 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.757 SO libspdk_iscsi.so.7.0 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.757 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.017 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.017 CC lib/ftl/utils/ftl_conf.o 00:03:09.017 SYMLINK libspdk_iscsi.so 00:03:09.017 CC lib/ftl/utils/ftl_md.o 00:03:09.017 CC lib/ftl/utils/ftl_mempool.o 00:03:09.017 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.017 CC lib/ftl/utils/ftl_property.o 00:03:09.017 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.017 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.017 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.017 LIB libspdk_vhost.a 00:03:09.278 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.278 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.278 SO libspdk_vhost.so.7.1 00:03:09.278 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:03:09.279 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.279 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.279 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.279 SYMLINK libspdk_vhost.so 00:03:09.279 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.279 CC lib/ftl/base/ftl_base_dev.o 00:03:09.279 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.279 CC lib/ftl/ftl_trace.o 00:03:09.279 LIB libspdk_nvmf.a 00:03:09.540 LIB libspdk_ftl.a 00:03:09.540 SO libspdk_nvmf.so.17.0 00:03:09.540 SYMLINK libspdk_nvmf.so 00:03:09.540 SO libspdk_ftl.so.8.0 00:03:09.802 SYMLINK libspdk_ftl.so 00:03:10.064 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.064 CC module/sock/posix/posix.o 00:03:10.064 CC module/accel/ioat/accel_ioat.o 00:03:10.064 CC module/blob/bdev/blob_bdev.o 00:03:10.064 CC module/accel/iaa/accel_iaa.o 00:03:10.064 CC module/accel/error/accel_error.o 00:03:10.064 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.064 CC module/accel/dsa/accel_dsa.o 00:03:10.064 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.064 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.064 LIB libspdk_env_dpdk_rpc.a 00:03:10.064 SO libspdk_env_dpdk_rpc.so.5.0 00:03:10.064 LIB libspdk_scheduler_gscheduler.a 00:03:10.064 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.064 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.064 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.064 SO libspdk_scheduler_gscheduler.so.3.0 00:03:10.064 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:10.064 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.064 CC module/accel/error/accel_error_rpc.o 00:03:10.064 LIB libspdk_blob_bdev.a 00:03:10.325 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.325 LIB libspdk_scheduler_dynamic.a 00:03:10.325 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.325 SO libspdk_blob_bdev.so.10.1 00:03:10.325 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.325 SO libspdk_scheduler_dynamic.so.3.0 00:03:10.325 SYMLINK libspdk_blob_bdev.so 00:03:10.325 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.325 LIB libspdk_accel_dsa.a 00:03:10.325 LIB libspdk_accel_ioat.a 00:03:10.325 LIB libspdk_accel_error.a 00:03:10.325 SO libspdk_accel_dsa.so.4.0 00:03:10.325 LIB libspdk_accel_iaa.a 00:03:10.325 SO libspdk_accel_ioat.so.5.0 00:03:10.325 SO libspdk_accel_error.so.1.0 00:03:10.325 SO libspdk_accel_iaa.so.2.0 00:03:10.325 SYMLINK libspdk_accel_dsa.so 00:03:10.325 SYMLINK libspdk_accel_ioat.so 00:03:10.325 SYMLINK libspdk_accel_error.so 00:03:10.325 SYMLINK libspdk_accel_iaa.so 00:03:10.325 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.325 CC module/bdev/delay/vbdev_delay.o 00:03:10.325 CC module/bdev/error/vbdev_error.o 00:03:10.325 CC module/bdev/lvol/vbdev_lvol.o 00:03:10.325 CC module/bdev/gpt/gpt.o 00:03:10.325 CC module/bdev/malloc/bdev_malloc.o 00:03:10.586 CC module/bdev/null/bdev_null.o 00:03:10.586 CC module/bdev/passthru/vbdev_passthru.o 00:03:10.586 CC module/bdev/nvme/bdev_nvme.o 00:03:10.586 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:10.586 LIB libspdk_sock_posix.a 00:03:10.586 CC module/bdev/gpt/vbdev_gpt.o 00:03:10.586 SO libspdk_sock_posix.so.5.0 00:03:10.586 SYMLINK libspdk_sock_posix.so 00:03:10.586 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.586 CC module/bdev/error/vbdev_error_rpc.o 00:03:10.586 LIB libspdk_blobfs_bdev.a 00:03:10.586 CC module/bdev/null/bdev_null_rpc.o 00:03:10.586 SO libspdk_blobfs_bdev.so.5.0 00:03:10.846 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.846 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.846 LIB libspdk_bdev_error.a 00:03:10.846 SYMLINK libspdk_blobfs_bdev.so 00:03:10.846 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:10.846 SO libspdk_bdev_error.so.5.0 00:03:10.846 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:10.846 LIB libspdk_bdev_gpt.a 00:03:10.846 SYMLINK libspdk_bdev_error.so 00:03:10.846 SO libspdk_bdev_gpt.so.5.0 00:03:10.846 LIB libspdk_bdev_null.a 00:03:10.846 CC module/bdev/raid/bdev_raid.o 00:03:10.846 SO libspdk_bdev_null.so.5.0 00:03:10.846 LIB libspdk_bdev_delay.a 00:03:10.846 LIB libspdk_bdev_passthru.a 00:03:10.846 SYMLINK libspdk_bdev_gpt.so 00:03:10.846 SO libspdk_bdev_delay.so.5.0 00:03:10.846 SO libspdk_bdev_passthru.so.5.0 00:03:10.846 CC module/bdev/split/vbdev_split.o 00:03:10.846 SYMLINK libspdk_bdev_null.so 00:03:10.846 CC module/bdev/nvme/nvme_rpc.o 00:03:10.846 SYMLINK libspdk_bdev_passthru.so 00:03:10.846 LIB libspdk_bdev_malloc.a 00:03:10.846 SYMLINK libspdk_bdev_delay.so 00:03:10.846 CC module/bdev/split/vbdev_split_rpc.o 00:03:11.106 SO libspdk_bdev_malloc.so.5.0 00:03:11.106 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.106 CC module/bdev/xnvme/bdev_xnvme.o 00:03:11.106 SYMLINK libspdk_bdev_malloc.so 00:03:11.106 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.106 LIB libspdk_bdev_lvol.a 00:03:11.106 SO libspdk_bdev_lvol.so.5.0 00:03:11.106 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.106 LIB libspdk_bdev_split.a 00:03:11.106 SYMLINK libspdk_bdev_lvol.so 00:03:11.106 SO libspdk_bdev_split.so.5.0 00:03:11.106 CC module/bdev/raid/raid0.o 00:03:11.106 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.106 SYMLINK libspdk_bdev_split.so 00:03:11.106 CC module/bdev/nvme/vbdev_opal.o 00:03:11.106 CC module/bdev/raid/raid1.o 00:03:11.367 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:11.367 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:11.367 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:11.367 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.367 CC module/bdev/raid/concat.o 00:03:11.367 CC module/bdev/aio/bdev_aio.o 00:03:11.367 LIB libspdk_bdev_zone_block.a 00:03:11.367 LIB libspdk_bdev_xnvme.a 00:03:11.367 SO libspdk_bdev_zone_block.so.5.0 00:03:11.367 CC module/bdev/aio/bdev_aio_rpc.o 00:03:11.367 SO libspdk_bdev_xnvme.so.2.0 00:03:11.367 CC module/bdev/ftl/bdev_ftl.o 00:03:11.367 SYMLINK libspdk_bdev_zone_block.so 00:03:11.367 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:11.628 SYMLINK libspdk_bdev_xnvme.so 00:03:11.628 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.628 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:11.628 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.628 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:11.628 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.628 LIB libspdk_bdev_raid.a 00:03:11.628 SO libspdk_bdev_raid.so.5.0 00:03:11.628 LIB libspdk_bdev_aio.a 00:03:11.628 SO libspdk_bdev_aio.so.5.0 00:03:11.628 SYMLINK libspdk_bdev_raid.so 00:03:11.628 SYMLINK libspdk_bdev_aio.so 00:03:11.628 LIB libspdk_bdev_ftl.a 00:03:11.628 SO libspdk_bdev_ftl.so.5.0 00:03:11.892 SYMLINK libspdk_bdev_ftl.so 00:03:11.892 LIB libspdk_bdev_iscsi.a 00:03:11.892 SO libspdk_bdev_iscsi.so.5.0 00:03:11.892 SYMLINK libspdk_bdev_iscsi.so 00:03:11.892 LIB libspdk_bdev_virtio.a 00:03:11.892 SO libspdk_bdev_virtio.so.5.0 00:03:12.152 SYMLINK libspdk_bdev_virtio.so 00:03:12.724 LIB libspdk_bdev_nvme.a 00:03:12.985 SO libspdk_bdev_nvme.so.6.0 00:03:12.985 SYMLINK libspdk_bdev_nvme.so 00:03:13.245 CC module/event/subsystems/vmd/vmd.o 00:03:13.245 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.245 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.245 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.245 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.245 CC module/event/subsystems/sock/sock.o 00:03:13.245 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.245 LIB libspdk_event_sock.a 00:03:13.245 LIB libspdk_event_vmd.a 00:03:13.245 LIB libspdk_event_scheduler.a 00:03:13.245 SO libspdk_event_sock.so.4.0 00:03:13.245 SO libspdk_event_vmd.so.5.0 00:03:13.245 SO libspdk_event_scheduler.so.3.0 00:03:13.505 SYMLINK libspdk_event_sock.so 00:03:13.505 LIB libspdk_event_vhost_blk.a 00:03:13.505 SYMLINK libspdk_event_vmd.so 00:03:13.505 LIB libspdk_event_iobuf.a 00:03:13.505 SYMLINK libspdk_event_scheduler.so 00:03:13.505 SO libspdk_event_vhost_blk.so.2.0 00:03:13.505 SO libspdk_event_iobuf.so.2.0 00:03:13.505 SYMLINK libspdk_event_vhost_blk.so 00:03:13.506 SYMLINK libspdk_event_iobuf.so 00:03:13.506 CC module/event/subsystems/accel/accel.o 00:03:13.803 LIB libspdk_event_accel.a 00:03:13.803 SO libspdk_event_accel.so.5.0 00:03:13.803 SYMLINK libspdk_event_accel.so 00:03:14.064 CC module/event/subsystems/bdev/bdev.o 00:03:14.064 LIB libspdk_event_bdev.a 00:03:14.064 SO libspdk_event_bdev.so.5.0 00:03:14.325 SYMLINK libspdk_event_bdev.so 00:03:14.325 CC module/event/subsystems/ublk/ublk.o 00:03:14.325 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.325 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.325 CC module/event/subsystems/nbd/nbd.o 00:03:14.325 CC module/event/subsystems/scsi/scsi.o 00:03:14.325 LIB libspdk_event_ublk.a 00:03:14.325 LIB libspdk_event_nbd.a 00:03:14.586 SO libspdk_event_ublk.so.2.0 00:03:14.586 SO libspdk_event_nbd.so.5.0 00:03:14.586 LIB libspdk_event_scsi.a 00:03:14.586 LIB libspdk_event_nvmf.a 00:03:14.586 SO libspdk_event_scsi.so.5.0 00:03:14.586 SYMLINK libspdk_event_nbd.so 00:03:14.586 SYMLINK libspdk_event_ublk.so 00:03:14.587 SO libspdk_event_nvmf.so.5.0 00:03:14.587 SYMLINK libspdk_event_scsi.so 00:03:14.587 SYMLINK libspdk_event_nvmf.so 00:03:14.587 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.587 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.847 LIB libspdk_event_vhost_scsi.a 00:03:14.847 LIB libspdk_event_iscsi.a 00:03:14.847 SO libspdk_event_vhost_scsi.so.2.0 00:03:14.847 SO libspdk_event_iscsi.so.5.0 00:03:14.847 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.847 SYMLINK libspdk_event_iscsi.so 00:03:15.109 SO libspdk.so.5.0 00:03:15.109 SYMLINK libspdk.so 00:03:15.109 CXX app/trace/trace.o 00:03:15.109 CC app/trace_record/trace_record.o 00:03:15.109 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.109 CC app/nvmf_tgt/nvmf_main.o 00:03:15.109 CC examples/accel/perf/accel_perf.o 00:03:15.109 CC examples/nvme/hello_world/hello_world.o 00:03:15.109 CC examples/ioat/perf/perf.o 00:03:15.109 CC examples/bdev/hello_world/hello_bdev.o 00:03:15.109 CC examples/blob/hello_world/hello_blob.o 00:03:15.370 CC test/accel/dif/dif.o 00:03:15.370 LINK nvmf_tgt 00:03:15.370 LINK iscsi_tgt 00:03:15.370 LINK hello_world 00:03:15.370 LINK spdk_trace_record 00:03:15.370 LINK ioat_perf 00:03:15.370 LINK hello_blob 00:03:15.370 LINK hello_bdev 00:03:15.631 LINK spdk_trace 00:03:15.631 CC app/spdk_lspci/spdk_lspci.o 00:03:15.631 CC examples/ioat/verify/verify.o 00:03:15.631 CC app/spdk_tgt/spdk_tgt.o 00:03:15.631 CC examples/nvme/reconnect/reconnect.o 00:03:15.631 LINK dif 00:03:15.631 CC examples/blob/cli/blobcli.o 00:03:15.631 LINK accel_perf 00:03:15.631 LINK spdk_lspci 00:03:15.893 CC test/app/bdev_svc/bdev_svc.o 00:03:15.893 CC examples/bdev/bdevperf/bdevperf.o 00:03:15.893 CC test/bdev/bdevio/bdevio.o 00:03:15.893 LINK spdk_tgt 00:03:15.893 LINK verify 
00:03:15.893 LINK bdev_svc 00:03:15.893 CC test/blobfs/mkfs/mkfs.o 00:03:15.893 CC examples/sock/hello_world/hello_sock.o 00:03:15.893 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.893 LINK reconnect 00:03:16.154 TEST_HEADER include/spdk/accel.h 00:03:16.154 TEST_HEADER include/spdk/accel_module.h 00:03:16.154 TEST_HEADER include/spdk/assert.h 00:03:16.154 TEST_HEADER include/spdk/barrier.h 00:03:16.154 TEST_HEADER include/spdk/base64.h 00:03:16.154 TEST_HEADER include/spdk/bdev.h 00:03:16.154 TEST_HEADER include/spdk/bdev_module.h 00:03:16.154 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.154 TEST_HEADER include/spdk/bit_array.h 00:03:16.154 TEST_HEADER include/spdk/bit_pool.h 00:03:16.154 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.154 CC app/spdk_nvme_perf/perf.o 00:03:16.154 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.154 TEST_HEADER include/spdk/blobfs.h 00:03:16.154 TEST_HEADER include/spdk/blob.h 00:03:16.154 TEST_HEADER include/spdk/conf.h 00:03:16.154 TEST_HEADER include/spdk/config.h 00:03:16.154 TEST_HEADER include/spdk/cpuset.h 00:03:16.154 TEST_HEADER include/spdk/crc16.h 00:03:16.154 TEST_HEADER include/spdk/crc32.h 00:03:16.154 TEST_HEADER include/spdk/crc64.h 00:03:16.154 TEST_HEADER include/spdk/dif.h 00:03:16.154 TEST_HEADER include/spdk/dma.h 00:03:16.154 TEST_HEADER include/spdk/endian.h 00:03:16.154 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.154 TEST_HEADER include/spdk/env.h 00:03:16.154 TEST_HEADER include/spdk/event.h 00:03:16.154 TEST_HEADER include/spdk/fd_group.h 00:03:16.154 TEST_HEADER include/spdk/fd.h 00:03:16.154 TEST_HEADER include/spdk/file.h 00:03:16.154 TEST_HEADER include/spdk/ftl.h 00:03:16.154 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.154 TEST_HEADER include/spdk/hexlify.h 00:03:16.154 TEST_HEADER include/spdk/histogram_data.h 00:03:16.154 LINK lsvmd 00:03:16.154 TEST_HEADER include/spdk/idxd.h 00:03:16.154 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.154 TEST_HEADER include/spdk/init.h 00:03:16.154 TEST_HEADER include/spdk/ioat.h 00:03:16.154 LINK mkfs 00:03:16.154 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.154 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.154 TEST_HEADER include/spdk/json.h 00:03:16.154 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.154 TEST_HEADER include/spdk/likely.h 00:03:16.154 TEST_HEADER include/spdk/log.h 00:03:16.154 TEST_HEADER include/spdk/lvol.h 00:03:16.154 TEST_HEADER include/spdk/memory.h 00:03:16.154 TEST_HEADER include/spdk/mmio.h 00:03:16.154 TEST_HEADER include/spdk/nbd.h 00:03:16.154 LINK blobcli 00:03:16.154 TEST_HEADER include/spdk/notify.h 00:03:16.154 TEST_HEADER include/spdk/nvme.h 00:03:16.154 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.154 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.154 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.154 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.154 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.154 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.154 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.154 TEST_HEADER include/spdk/nvmf.h 00:03:16.154 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.154 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.154 TEST_HEADER include/spdk/opal.h 00:03:16.154 TEST_HEADER include/spdk/opal_spec.h 00:03:16.154 TEST_HEADER include/spdk/pci_ids.h 00:03:16.154 TEST_HEADER include/spdk/pipe.h 00:03:16.154 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.154 TEST_HEADER include/spdk/queue.h 00:03:16.154 TEST_HEADER include/spdk/reduce.h 00:03:16.154 TEST_HEADER include/spdk/rpc.h 00:03:16.154 LINK bdevio 00:03:16.154 
TEST_HEADER include/spdk/scheduler.h 00:03:16.154 TEST_HEADER include/spdk/scsi.h 00:03:16.154 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.154 TEST_HEADER include/spdk/sock.h 00:03:16.154 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.154 TEST_HEADER include/spdk/stdinc.h 00:03:16.154 TEST_HEADER include/spdk/string.h 00:03:16.154 TEST_HEADER include/spdk/thread.h 00:03:16.154 TEST_HEADER include/spdk/trace.h 00:03:16.154 TEST_HEADER include/spdk/trace_parser.h 00:03:16.154 LINK hello_sock 00:03:16.154 TEST_HEADER include/spdk/tree.h 00:03:16.154 TEST_HEADER include/spdk/ublk.h 00:03:16.154 TEST_HEADER include/spdk/util.h 00:03:16.154 TEST_HEADER include/spdk/uuid.h 00:03:16.154 TEST_HEADER include/spdk/version.h 00:03:16.154 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.154 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.154 TEST_HEADER include/spdk/vhost.h 00:03:16.415 TEST_HEADER include/spdk/vmd.h 00:03:16.415 TEST_HEADER include/spdk/xor.h 00:03:16.415 TEST_HEADER include/spdk/zipf.h 00:03:16.415 CXX test/cpp_headers/accel.o 00:03:16.415 CC examples/vmd/led/led.o 00:03:16.415 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.415 CXX test/cpp_headers/accel_module.o 00:03:16.415 CC test/app/histogram_perf/histogram_perf.o 00:03:16.415 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.415 CC test/app/jsoncat/jsoncat.o 00:03:16.415 LINK led 00:03:16.676 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.676 LINK histogram_perf 00:03:16.676 LINK bdevperf 00:03:16.676 CXX test/cpp_headers/assert.o 00:03:16.676 LINK jsoncat 00:03:16.676 LINK nvme_fuzz 00:03:16.676 CC test/app/stub/stub.o 00:03:16.676 CXX test/cpp_headers/barrier.o 00:03:16.676 LINK nvme_manage 00:03:16.676 CXX test/cpp_headers/base64.o 00:03:16.676 CC app/spdk_nvme_identify/identify.o 00:03:16.937 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.937 CXX test/cpp_headers/bdev.o 00:03:16.937 CC examples/nvme/arbitration/arbitration.o 00:03:16.937 LINK stub 00:03:16.937 LINK spdk_nvme_perf 00:03:16.937 LINK spdk_nvme_discover 00:03:16.937 CC examples/util/zipf/zipf.o 00:03:16.937 LINK vhost_fuzz 00:03:16.937 CC examples/nvmf/nvmf/nvmf.o 00:03:16.937 CXX test/cpp_headers/bdev_module.o 00:03:17.197 CXX test/cpp_headers/bdev_zone.o 00:03:17.197 LINK zipf 00:03:17.197 CXX test/cpp_headers/bit_array.o 00:03:17.197 CXX test/cpp_headers/bit_pool.o 00:03:17.197 LINK arbitration 00:03:17.197 CC examples/nvme/hotplug/hotplug.o 00:03:17.197 CC examples/thread/thread/thread_ex.o 00:03:17.197 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.197 CXX test/cpp_headers/blob_bdev.o 00:03:17.197 CC examples/idxd/perf/perf.o 00:03:17.197 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.197 LINK nvmf 00:03:17.458 CC examples/nvme/abort/abort.o 00:03:17.458 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.458 CXX test/cpp_headers/blobfs.o 00:03:17.458 LINK cmb_copy 00:03:17.458 LINK interrupt_tgt 00:03:17.458 LINK hotplug 00:03:17.458 LINK thread 00:03:17.458 LINK spdk_nvme_identify 00:03:17.719 CXX test/cpp_headers/blob.o 00:03:17.719 CXX test/cpp_headers/conf.o 00:03:17.719 LINK idxd_perf 00:03:17.719 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.719 CC app/spdk_top/spdk_top.o 00:03:17.719 CC test/dma/test_dma/test_dma.o 00:03:17.719 LINK abort 00:03:17.719 CXX test/cpp_headers/config.o 00:03:17.719 CC test/event/event_perf/event_perf.o 00:03:17.719 CC test/env/vtophys/vtophys.o 00:03:17.719 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.719 CXX test/cpp_headers/cpuset.o 00:03:17.719 LINK pmr_persistence 00:03:17.719 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.981 CC test/env/memory/memory_ut.o 00:03:17.981 LINK vtophys 00:03:17.981 LINK event_perf 00:03:17.981 CXX test/cpp_headers/crc16.o 00:03:17.981 LINK env_dpdk_post_init 00:03:17.981 CC test/env/pci/pci_ut.o 00:03:17.981 CXX test/cpp_headers/crc32.o 00:03:17.981 CXX test/cpp_headers/crc64.o 00:03:17.981 CC test/event/reactor/reactor.o 00:03:17.981 LINK test_dma 00:03:17.981 LINK iscsi_fuzz 00:03:18.241 CXX test/cpp_headers/dif.o 00:03:18.241 LINK mem_callbacks 00:03:18.241 CXX test/cpp_headers/dma.o 00:03:18.241 LINK reactor 00:03:18.241 CC test/lvol/esnap/esnap.o 00:03:18.241 CXX test/cpp_headers/endian.o 00:03:18.241 CXX test/cpp_headers/env_dpdk.o 00:03:18.241 CC app/vhost/vhost.o 00:03:18.241 CC test/event/reactor_perf/reactor_perf.o 00:03:18.241 CXX test/cpp_headers/env.o 00:03:18.241 LINK pci_ut 00:03:18.241 CC app/spdk_dd/spdk_dd.o 00:03:18.241 CC app/fio/nvme/fio_plugin.o 00:03:18.502 CC test/nvme/aer/aer.o 00:03:18.502 CXX test/cpp_headers/event.o 00:03:18.502 LINK reactor_perf 00:03:18.502 LINK vhost 00:03:18.502 LINK spdk_top 00:03:18.502 CXX test/cpp_headers/fd_group.o 00:03:18.502 CC test/nvme/reset/reset.o 00:03:18.502 LINK aer 00:03:18.502 CC test/event/app_repeat/app_repeat.o 00:03:18.762 CXX test/cpp_headers/fd.o 00:03:18.762 LINK memory_ut 00:03:18.762 LINK spdk_dd 00:03:18.762 CC test/event/scheduler/scheduler.o 00:03:18.762 CC app/fio/bdev/fio_plugin.o 00:03:18.762 LINK app_repeat 00:03:18.762 CXX test/cpp_headers/file.o 00:03:18.762 CC test/nvme/sgl/sgl.o 00:03:18.762 CXX test/cpp_headers/ftl.o 00:03:18.762 LINK reset 00:03:18.762 CC test/rpc_client/rpc_client_test.o 00:03:18.762 CXX test/cpp_headers/gpt_spec.o 00:03:18.762 CC test/nvme/e2edp/nvme_dp.o 00:03:18.762 LINK scheduler 00:03:19.023 LINK spdk_nvme 00:03:19.023 CXX test/cpp_headers/hexlify.o 00:03:19.023 CC test/nvme/overhead/overhead.o 00:03:19.023 CC test/nvme/err_injection/err_injection.o 00:03:19.023 LINK sgl 00:03:19.023 LINK rpc_client_test 00:03:19.023 CXX test/cpp_headers/histogram_data.o 00:03:19.023 CXX test/cpp_headers/idxd.o 00:03:19.023 LINK nvme_dp 00:03:19.023 CC test/thread/poller_perf/poller_perf.o 00:03:19.023 CXX test/cpp_headers/idxd_spec.o 00:03:19.023 CXX test/cpp_headers/init.o 00:03:19.285 LINK spdk_bdev 00:03:19.285 LINK err_injection 00:03:19.285 CC test/nvme/startup/startup.o 00:03:19.285 LINK poller_perf 00:03:19.285 CXX test/cpp_headers/ioat.o 00:03:19.285 CC test/nvme/reserve/reserve.o 00:03:19.285 LINK overhead 00:03:19.285 CXX test/cpp_headers/ioat_spec.o 00:03:19.285 CXX test/cpp_headers/iscsi_spec.o 00:03:19.285 CXX test/cpp_headers/json.o 00:03:19.285 CXX test/cpp_headers/jsonrpc.o 00:03:19.285 CXX test/cpp_headers/likely.o 00:03:19.285 LINK startup 00:03:19.285 CC test/nvme/simple_copy/simple_copy.o 00:03:19.285 CXX test/cpp_headers/log.o 00:03:19.285 CXX test/cpp_headers/lvol.o 00:03:19.285 CXX test/cpp_headers/memory.o 00:03:19.285 LINK reserve 00:03:19.546 CC test/nvme/connect_stress/connect_stress.o 00:03:19.546 CC test/nvme/boot_partition/boot_partition.o 00:03:19.546 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.546 CC test/nvme/compliance/nvme_compliance.o 00:03:19.547 CXX test/cpp_headers/mmio.o 00:03:19.547 LINK simple_copy 00:03:19.547 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.547 CC test/nvme/fdp/fdp.o 00:03:19.547 CC test/nvme/cuse/cuse.o 00:03:19.547 LINK connect_stress 00:03:19.547 LINK boot_partition 00:03:19.547 CXX test/cpp_headers/nbd.o 00:03:19.547 CXX test/cpp_headers/notify.o 
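(The CXX test/cpp_headers/*.o lines around this point build one small object per public spdk/*.h header; the pass appears to exist to prove that every installed header compiles on its own from a C++ translation unit. The real rules live in test/cpp_headers in the SPDK tree; the loop below is only an illustrative equivalent, with the scratch directory, compiler, and flags assumed rather than taken from this build, and it presumes it is run from the repository root:

    # illustrative stand-alone header check; the real generator lives in test/cpp_headers
    mkdir -p build/cpp_headers
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # each header gets its own .cpp that does nothing but include it
        echo "#include <spdk/${name}.h>" > "build/cpp_headers/${name}.cpp"
        g++ -std=c++11 -Iinclude -c "build/cpp_headers/${name}.cpp" -o "build/cpp_headers/${name}.o"
    done
)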
00:03:19.547 CXX test/cpp_headers/nvme.o 00:03:19.808 LINK fused_ordering 00:03:19.808 LINK doorbell_aers 00:03:19.808 CXX test/cpp_headers/nvme_intel.o 00:03:19.808 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.808 LINK fdp 00:03:19.808 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.808 CXX test/cpp_headers/nvme_spec.o 00:03:19.808 CXX test/cpp_headers/nvme_zns.o 00:03:19.808 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.808 LINK nvme_compliance 00:03:19.808 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.808 CXX test/cpp_headers/nvmf.o 00:03:19.808 CXX test/cpp_headers/nvmf_spec.o 00:03:19.808 CXX test/cpp_headers/nvmf_transport.o 00:03:19.808 CXX test/cpp_headers/opal.o 00:03:20.069 CXX test/cpp_headers/opal_spec.o 00:03:20.069 CXX test/cpp_headers/pci_ids.o 00:03:20.069 CXX test/cpp_headers/pipe.o 00:03:20.069 CXX test/cpp_headers/queue.o 00:03:20.069 CXX test/cpp_headers/reduce.o 00:03:20.069 CXX test/cpp_headers/rpc.o 00:03:20.069 CXX test/cpp_headers/scheduler.o 00:03:20.069 CXX test/cpp_headers/scsi.o 00:03:20.069 CXX test/cpp_headers/scsi_spec.o 00:03:20.069 CXX test/cpp_headers/sock.o 00:03:20.069 CXX test/cpp_headers/stdinc.o 00:03:20.069 CXX test/cpp_headers/string.o 00:03:20.069 CXX test/cpp_headers/thread.o 00:03:20.069 CXX test/cpp_headers/trace.o 00:03:20.069 CXX test/cpp_headers/trace_parser.o 00:03:20.069 CXX test/cpp_headers/tree.o 00:03:20.069 CXX test/cpp_headers/ublk.o 00:03:20.069 CXX test/cpp_headers/util.o 00:03:20.069 CXX test/cpp_headers/uuid.o 00:03:20.069 CXX test/cpp_headers/version.o 00:03:20.069 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.330 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.330 CXX test/cpp_headers/vhost.o 00:03:20.330 CXX test/cpp_headers/vmd.o 00:03:20.330 CXX test/cpp_headers/xor.o 00:03:20.330 CXX test/cpp_headers/zipf.o 00:03:20.330 LINK cuse 00:03:22.245 LINK esnap 00:03:22.245 ************************************ 00:03:22.245 END TEST make 00:03:22.245 ************************************ 00:03:22.245 00:03:22.245 real 0m49.562s 00:03:22.245 user 4m53.373s 00:03:22.245 sys 0m59.180s 00:03:22.245 19:56:29 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:22.245 19:56:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.506 19:56:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:22.506 19:56:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:22.506 19:56:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:22.506 19:56:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:22.506 19:56:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:22.506 19:56:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:22.506 19:56:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:22.506 19:56:30 -- scripts/common.sh@335 -- # IFS=.-: 00:03:22.506 19:56:30 -- scripts/common.sh@335 -- # read -ra ver1 00:03:22.506 19:56:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.506 19:56:30 -- scripts/common.sh@336 -- # read -ra ver2 00:03:22.506 19:56:30 -- scripts/common.sh@337 -- # local 'op=<' 00:03:22.506 19:56:30 -- scripts/common.sh@339 -- # ver1_l=2 00:03:22.506 19:56:30 -- scripts/common.sh@340 -- # ver2_l=1 00:03:22.506 19:56:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:22.506 19:56:30 -- scripts/common.sh@343 -- # case "$op" in 00:03:22.506 19:56:30 -- scripts/common.sh@344 -- # : 1 00:03:22.506 19:56:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:22.506 19:56:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.506 19:56:30 -- scripts/common.sh@364 -- # decimal 1 00:03:22.506 19:56:30 -- scripts/common.sh@352 -- # local d=1 00:03:22.506 19:56:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.506 19:56:30 -- scripts/common.sh@354 -- # echo 1 00:03:22.506 19:56:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:22.506 19:56:30 -- scripts/common.sh@365 -- # decimal 2 00:03:22.506 19:56:30 -- scripts/common.sh@352 -- # local d=2 00:03:22.506 19:56:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.506 19:56:30 -- scripts/common.sh@354 -- # echo 2 00:03:22.506 19:56:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:22.506 19:56:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:22.506 19:56:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:22.506 19:56:30 -- scripts/common.sh@367 -- # return 0 00:03:22.506 19:56:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.507 19:56:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:22.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.507 --rc genhtml_branch_coverage=1 00:03:22.507 --rc genhtml_function_coverage=1 00:03:22.507 --rc genhtml_legend=1 00:03:22.507 --rc geninfo_all_blocks=1 00:03:22.507 --rc geninfo_unexecuted_blocks=1 00:03:22.507 00:03:22.507 ' 00:03:22.507 19:56:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:22.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.507 --rc genhtml_branch_coverage=1 00:03:22.507 --rc genhtml_function_coverage=1 00:03:22.507 --rc genhtml_legend=1 00:03:22.507 --rc geninfo_all_blocks=1 00:03:22.507 --rc geninfo_unexecuted_blocks=1 00:03:22.507 00:03:22.507 ' 00:03:22.507 19:56:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:22.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.507 --rc genhtml_branch_coverage=1 00:03:22.507 --rc genhtml_function_coverage=1 00:03:22.507 --rc genhtml_legend=1 00:03:22.507 --rc geninfo_all_blocks=1 00:03:22.507 --rc geninfo_unexecuted_blocks=1 00:03:22.507 00:03:22.507 ' 00:03:22.507 19:56:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:22.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.507 --rc genhtml_branch_coverage=1 00:03:22.507 --rc genhtml_function_coverage=1 00:03:22.507 --rc genhtml_legend=1 00:03:22.507 --rc geninfo_all_blocks=1 00:03:22.507 --rc geninfo_unexecuted_blocks=1 00:03:22.507 00:03:22.507 ' 00:03:22.507 19:56:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.507 19:56:30 -- nvmf/common.sh@7 -- # uname -s 00:03:22.507 19:56:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.507 19:56:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.507 19:56:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.507 19:56:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.507 19:56:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.507 19:56:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.507 19:56:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.507 19:56:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.507 19:56:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.507 19:56:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.507 19:56:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab2fd980-8183-46c1-a9af-566bb5c57102 00:03:22.507 
19:56:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ab2fd980-8183-46c1-a9af-566bb5c57102 00:03:22.507 19:56:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.507 19:56:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.507 19:56:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:22.507 19:56:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.507 19:56:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.507 19:56:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.507 19:56:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.507 19:56:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.507 19:56:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.507 19:56:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.507 19:56:30 -- paths/export.sh@5 -- # export PATH 00:03:22.507 19:56:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.507 19:56:30 -- nvmf/common.sh@46 -- # : 0 00:03:22.507 19:56:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:22.507 19:56:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:22.507 19:56:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:22.507 19:56:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.507 19:56:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.507 19:56:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:22.507 19:56:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:22.507 19:56:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:22.507 19:56:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.507 19:56:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.507 19:56:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.507 19:56:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.507 19:56:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.507 19:56:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.507 19:56:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.507 19:56:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.507 19:56:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.507 19:56:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.507 19:56:30 -- spdk/autotest.sh@48 
-- # udevadm_pid=48162 00:03:22.507 19:56:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.507 19:56:30 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:22.507 19:56:30 -- spdk/autotest.sh@54 -- # echo 48186 00:03:22.507 19:56:30 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:22.768 19:56:30 -- spdk/autotest.sh@56 -- # echo 48191 00:03:22.768 19:56:30 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:22.768 19:56:30 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:22.768 19:56:30 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.768 19:56:30 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:22.768 19:56:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.768 19:56:30 -- common/autotest_common.sh@10 -- # set +x 00:03:22.768 19:56:30 -- spdk/autotest.sh@70 -- # create_test_list 00:03:22.768 19:56:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:22.768 19:56:30 -- common/autotest_common.sh@10 -- # set +x 00:03:22.768 19:56:30 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.768 19:56:30 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.768 19:56:30 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.768 19:56:30 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.768 19:56:30 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.768 19:56:30 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:22.768 19:56:30 -- common/autotest_common.sh@1450 -- # uname 00:03:22.768 19:56:30 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:22.768 19:56:30 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:22.768 19:56:30 -- common/autotest_common.sh@1470 -- # uname 00:03:22.768 19:56:30 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:22.768 19:56:30 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:22.768 19:56:30 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.768 lcov: LCOV version 1.15 00:03:22.768 19:56:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:29.368 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:29.368 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:29.368 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:29.368 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:29.368 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:29.368 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:51.340 19:56:56 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:51.340 19:56:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.340 19:56:56 -- common/autotest_common.sh@10 -- # set +x 00:03:51.340 19:56:56 -- spdk/autotest.sh@89 -- # rm -f 00:03:51.340 19:56:56 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.340 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:51.340 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:51.340 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:51.340 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:51.340 19:56:56 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:51.340 19:56:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:51.340 19:56:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:51.340 19:56:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n2 00:03:51.340 19:56:56 -- 
common/autotest_common.sh@1657 -- # local device=nvme3n2 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.340 19:56:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n3 00:03:51.340 19:56:56 -- common/autotest_common.sh@1657 -- # local device=nvme3n3 00:03:51.340 19:56:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:51.340 19:56:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.340 19:56:56 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:51.340 19:56:56 -- spdk/autotest.sh@108 -- # grep -v p 00:03:51.340 19:56:56 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme3n2 /dev/nvme3n3 00:03:51.340 19:56:57 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:51.340 19:56:57 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:51.340 19:56:57 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:51.340 19:56:57 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:51.340 19:56:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.340 No valid GPT data, bailing 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # pt= 00:03:51.340 19:56:57 -- scripts/common.sh@394 -- # return 1 00:03:51.340 19:56:57 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.340 1+0 records in 00:03:51.340 1+0 records out 00:03:51.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108111 s, 97.0 MB/s 00:03:51.340 19:56:57 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:51.340 19:56:57 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:51.340 19:56:57 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:51.340 19:56:57 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:51.340 19:56:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:51.340 No valid GPT data, bailing 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # pt= 00:03:51.340 19:56:57 -- scripts/common.sh@394 -- # return 1 00:03:51.340 19:56:57 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:51.340 1+0 records in 00:03:51.340 1+0 records out 00:03:51.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462573 s, 227 MB/s 00:03:51.340 19:56:57 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:51.340 19:56:57 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:51.340 19:56:57 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:51.340 19:56:57 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:51.340 19:56:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:51.340 No valid GPT data, bailing 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # pt= 00:03:51.340 19:56:57 -- scripts/common.sh@394 -- # return 1 00:03:51.340 19:56:57 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:51.340 1+0 
records in 00:03:51.340 1+0 records out 00:03:51.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00330294 s, 317 MB/s 00:03:51.340 19:56:57 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:51.340 19:56:57 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:51.340 19:56:57 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:51.340 19:56:57 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:51.340 19:56:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:51.340 No valid GPT data, bailing 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # pt= 00:03:51.340 19:56:57 -- scripts/common.sh@394 -- # return 1 00:03:51.340 19:56:57 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:51.340 1+0 records in 00:03:51.340 1+0 records out 00:03:51.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00297322 s, 353 MB/s 00:03:51.340 19:56:57 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:51.340 19:56:57 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:51.340 19:56:57 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n2 00:03:51.340 19:56:57 -- scripts/common.sh@380 -- # local block=/dev/nvme3n2 pt 00:03:51.340 19:56:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:03:51.340 No valid GPT data, bailing 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # pt= 00:03:51.340 19:56:57 -- scripts/common.sh@394 -- # return 1 00:03:51.340 19:56:57 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:03:51.340 1+0 records in 00:03:51.340 1+0 records out 00:03:51.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485646 s, 216 MB/s 00:03:51.340 19:56:57 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:51.340 19:56:57 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:51.340 19:56:57 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n3 00:03:51.340 19:56:57 -- scripts/common.sh@380 -- # local block=/dev/nvme3n3 pt 00:03:51.340 19:56:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:03:51.340 No valid GPT data, bailing 00:03:51.340 19:56:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:03:51.341 19:56:57 -- scripts/common.sh@393 -- # pt= 00:03:51.341 19:56:57 -- scripts/common.sh@394 -- # return 1 00:03:51.341 19:56:57 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:03:51.341 1+0 records in 00:03:51.341 1+0 records out 00:03:51.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386316 s, 271 MB/s 00:03:51.341 19:56:57 -- spdk/autotest.sh@116 -- # sync 00:03:51.341 19:56:57 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.341 19:56:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.341 19:56:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.913 19:56:59 -- spdk/autotest.sh@122 -- # uname -s 00:03:51.913 19:56:59 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:51.913 19:56:59 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:51.913 19:56:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.913 19:56:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.913 19:56:59 -- common/autotest_common.sh@10 -- # set +x 00:03:51.913 ************************************ 00:03:51.913 START TEST setup.sh 00:03:51.913 ************************************ 00:03:51.913 19:56:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:51.913 * Looking for test storage... 00:03:51.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.913 19:56:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.913 19:56:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.913 19:56:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.913 19:56:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.913 19:56:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.913 19:56:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.913 19:56:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.913 19:56:59 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.913 19:56:59 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.913 19:56:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.913 19:56:59 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.913 19:56:59 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.913 19:56:59 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.913 19:56:59 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.913 19:56:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.913 19:56:59 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.913 19:56:59 -- scripts/common.sh@344 -- # : 1 00:03:51.913 19:56:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.913 19:56:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.913 19:56:59 -- scripts/common.sh@364 -- # decimal 1 00:03:51.913 19:56:59 -- scripts/common.sh@352 -- # local d=1 00:03:51.913 19:56:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.913 19:56:59 -- scripts/common.sh@354 -- # echo 1 00:03:51.913 19:56:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.913 19:56:59 -- scripts/common.sh@365 -- # decimal 2 00:03:51.913 19:56:59 -- scripts/common.sh@352 -- # local d=2 00:03:51.913 19:56:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.913 19:56:59 -- scripts/common.sh@354 -- # echo 2 00:03:51.913 19:56:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.913 19:56:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.913 19:56:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.913 19:56:59 -- scripts/common.sh@367 -- # return 0 00:03:51.913 19:56:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.913 19:56:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.913 --rc genhtml_branch_coverage=1 00:03:51.913 --rc genhtml_function_coverage=1 00:03:51.913 --rc genhtml_legend=1 00:03:51.913 --rc geninfo_all_blocks=1 00:03:51.913 --rc geninfo_unexecuted_blocks=1 00:03:51.913 00:03:51.913 ' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- setup/test-setup.sh@10 -- # uname -s 00:03:51.914 19:56:59 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.914 19:56:59 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:51.914 19:56:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.914 19:56:59 -- common/autotest_common.sh@10 -- # set +x 00:03:51.914 ************************************ 00:03:51.914 START TEST acl 00:03:51.914 ************************************ 00:03:51.914 19:56:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:51.914 * Looking for test storage... 
00:03:51.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.914 19:56:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.914 19:56:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.914 19:56:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.914 19:56:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.914 19:56:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.914 19:56:59 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.914 19:56:59 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.914 19:56:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.914 19:56:59 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.914 19:56:59 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.914 19:56:59 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.914 19:56:59 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.914 19:56:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.914 19:56:59 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.914 19:56:59 -- scripts/common.sh@344 -- # : 1 00:03:51.914 19:56:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.914 19:56:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.914 19:56:59 -- scripts/common.sh@364 -- # decimal 1 00:03:51.914 19:56:59 -- scripts/common.sh@352 -- # local d=1 00:03:51.914 19:56:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.914 19:56:59 -- scripts/common.sh@354 -- # echo 1 00:03:51.914 19:56:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.914 19:56:59 -- scripts/common.sh@365 -- # decimal 2 00:03:51.914 19:56:59 -- scripts/common.sh@352 -- # local d=2 00:03:51.914 19:56:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.914 19:56:59 -- scripts/common.sh@354 -- # echo 2 00:03:51.914 19:56:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.914 19:56:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.914 19:56:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.914 19:56:59 -- scripts/common.sh@367 -- # return 0 00:03:51.914 19:56:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.914 --rc genhtml_branch_coverage=1 00:03:51.914 --rc genhtml_function_coverage=1 00:03:51.914 --rc genhtml_legend=1 00:03:51.914 --rc geninfo_all_blocks=1 00:03:51.914 --rc geninfo_unexecuted_blocks=1 00:03:51.914 00:03:51.914 ' 00:03:51.914 19:56:59 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.914 19:56:59 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:51.914 19:56:59 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:51.914 19:56:59 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:51.914 19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n2 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme3n2 00:03:51.914 19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.914 19:56:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n3 00:03:51.914 19:56:59 -- common/autotest_common.sh@1657 -- # local device=nvme3n3 00:03:51.914 
19:56:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:51.914 19:56:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.914 19:56:59 -- setup/acl.sh@12 -- # devs=() 00:03:51.914 19:56:59 -- setup/acl.sh@12 -- # declare -a devs 00:03:51.914 19:56:59 -- setup/acl.sh@13 -- # drivers=() 00:03:51.914 19:56:59 -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.914 19:56:59 -- setup/acl.sh@51 -- # setup reset 00:03:51.914 19:56:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.914 19:56:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.859 19:57:00 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:52.859 19:57:00 -- setup/acl.sh@16 -- # local dev driver 00:03:52.859 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.859 19:57:00 -- setup/acl.sh@15 -- # setup output status 00:03:52.859 19:57:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.859 19:57:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:53.119 Hugepages 00:03:53.119 node hugesize free / total 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # continue 00:03:53.119 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.119 00:03:53.119 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # continue 00:03:53.119 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:53.119 19:57:00 -- setup/acl.sh@20 -- # continue 00:03:53.119 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:53.119 19:57:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:53.119 19:57:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:53.119 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.119 19:57:00 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:53.119 19:57:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.119 19:57:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:53.119 19:57:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:53.119 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.380 19:57:00 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:53.380 19:57:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:53.380 19:57:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:53.380 19:57:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:53.380 19:57:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:53.380 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.380 19:57:00 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:53.380 19:57:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:53.380 19:57:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:53.380 19:57:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:53.380 19:57:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
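(The setup/acl.sh trace above is collect_setup_devs at work: it re-runs setup.sh status, skips the hugepage rows, and for every PCI row whose driver column is nvme and whose BDF is not listed in PCI_BLOCKED it records the device and its driver. A rough sketch reconstructed from the trace, not the verbatim script; the column layout is taken from the "Type BDF Vendor Device NUMA Driver Device Block devices" table printed above:

    # rough sketch of the device collection traced above (setup/acl.sh@12-22)
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue           # keep only PCI BDF rows, skip hugepage lines
        [[ $driver == nvme ]] || continue           # only NVMe controllers take part in the test
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor PCI_BLOCKED, as acl.sh@21 does
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)

With the four controllers shown above bound to nvme and nothing blocked, this yields the (( 4 > 0 )) check that follows before the denied/allowed sub-tests run.
)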
00:03:53.380 19:57:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.380 19:57:00 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:53.380 19:57:00 -- setup/acl.sh@54 -- # run_test denied denied 00:03:53.380 19:57:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.380 19:57:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.380 19:57:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.380 ************************************ 00:03:53.380 START TEST denied 00:03:53.380 ************************************ 00:03:53.380 19:57:00 -- common/autotest_common.sh@1114 -- # denied 00:03:53.380 19:57:00 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:53.380 19:57:00 -- setup/acl.sh@38 -- # setup output config 00:03:53.380 19:57:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.380 19:57:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.380 19:57:00 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:54.320 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:54.320 19:57:01 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:54.320 19:57:01 -- setup/acl.sh@28 -- # local dev driver 00:03:54.320 19:57:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.320 19:57:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:54.320 19:57:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:54.320 19:57:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.320 19:57:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.320 19:57:01 -- setup/acl.sh@41 -- # setup reset 00:03:54.320 19:57:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.320 19:57:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.911 00:04:00.911 real 0m6.717s 00:04:00.911 user 0m0.631s 00:04:00.911 sys 0m1.068s 00:04:00.911 19:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.911 19:57:07 -- common/autotest_common.sh@10 -- # set +x 00:04:00.911 ************************************ 00:04:00.911 END TEST denied 00:04:00.911 ************************************ 00:04:00.911 19:57:07 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.911 19:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.911 19:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.911 19:57:07 -- common/autotest_common.sh@10 -- # set +x 00:04:00.911 ************************************ 00:04:00.911 START TEST allowed 00:04:00.911 ************************************ 00:04:00.911 19:57:07 -- common/autotest_common.sh@1114 -- # allowed 00:04:00.911 19:57:07 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:00.911 19:57:07 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:00.911 19:57:07 -- setup/acl.sh@45 -- # setup output config 00:04:00.911 19:57:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.911 19:57:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.911 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.911 19:57:08 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:00.911 19:57:08 -- setup/acl.sh@28 -- # local dev driver 00:04:00.911 19:57:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.911 19:57:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:00.911 19:57:08 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:00.911 19:57:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.911 19:57:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.911 19:57:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.911 19:57:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:00.911 19:57:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:00.911 19:57:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.911 19:57:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.911 19:57:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.911 19:57:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:00.911 19:57:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:00.912 19:57:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.912 19:57:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.912 19:57:08 -- setup/acl.sh@48 -- # setup reset 00:04:00.912 19:57:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.912 19:57:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.855 00:04:01.855 real 0m1.802s 00:04:01.855 user 0m0.761s 00:04:01.855 sys 0m0.972s 00:04:01.855 19:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.855 19:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:01.855 ************************************ 00:04:01.855 END TEST allowed 00:04:01.855 ************************************ 00:04:01.855 00:04:01.855 real 0m10.064s 00:04:01.855 user 0m2.073s 00:04:01.855 sys 0m2.893s 00:04:01.855 19:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.855 ************************************ 00:04:01.855 END TEST acl 00:04:01.855 ************************************ 00:04:01.855 19:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:02.117 19:57:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.117 19:57:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.117 19:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.118 19:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:02.118 ************************************ 00:04:02.118 START TEST hugepages 00:04:02.118 ************************************ 00:04:02.118 19:57:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.118 * Looking for test storage... 
00:04:02.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.118 19:57:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.118 19:57:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.118 19:57:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.118 19:57:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.118 19:57:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.118 19:57:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.118 19:57:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.118 19:57:09 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.118 19:57:09 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.118 19:57:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.118 19:57:09 -- scripts/common.sh@336 -- # read -ra ver2 00:04:02.118 19:57:09 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.118 19:57:09 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.118 19:57:09 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.118 19:57:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.118 19:57:09 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.118 19:57:09 -- scripts/common.sh@344 -- # : 1 00:04:02.118 19:57:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.118 19:57:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.118 19:57:09 -- scripts/common.sh@364 -- # decimal 1 00:04:02.118 19:57:09 -- scripts/common.sh@352 -- # local d=1 00:04:02.118 19:57:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.118 19:57:09 -- scripts/common.sh@354 -- # echo 1 00:04:02.118 19:57:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.118 19:57:09 -- scripts/common.sh@365 -- # decimal 2 00:04:02.118 19:57:09 -- scripts/common.sh@352 -- # local d=2 00:04:02.118 19:57:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.118 19:57:09 -- scripts/common.sh@354 -- # echo 2 00:04:02.118 19:57:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.118 19:57:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.118 19:57:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.118 19:57:09 -- scripts/common.sh@367 -- # return 0 00:04:02.118 19:57:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.118 19:57:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.118 --rc genhtml_branch_coverage=1 00:04:02.118 --rc genhtml_function_coverage=1 00:04:02.118 --rc genhtml_legend=1 00:04:02.118 --rc geninfo_all_blocks=1 00:04:02.118 --rc geninfo_unexecuted_blocks=1 00:04:02.118 00:04:02.118 ' 00:04:02.118 19:57:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.118 --rc genhtml_branch_coverage=1 00:04:02.118 --rc genhtml_function_coverage=1 00:04:02.118 --rc genhtml_legend=1 00:04:02.118 --rc geninfo_all_blocks=1 00:04:02.118 --rc geninfo_unexecuted_blocks=1 00:04:02.118 00:04:02.118 ' 00:04:02.118 19:57:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.118 --rc genhtml_branch_coverage=1 00:04:02.118 --rc genhtml_function_coverage=1 00:04:02.118 --rc genhtml_legend=1 00:04:02.118 --rc geninfo_all_blocks=1 00:04:02.118 --rc geninfo_unexecuted_blocks=1 00:04:02.118 00:04:02.118 ' 00:04:02.118 19:57:09 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.118 --rc genhtml_branch_coverage=1 00:04:02.118 --rc genhtml_function_coverage=1 00:04:02.118 --rc genhtml_legend=1 00:04:02.118 --rc geninfo_all_blocks=1 00:04:02.118 --rc geninfo_unexecuted_blocks=1 00:04:02.118 00:04:02.118 ' 00:04:02.118 19:57:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.118 19:57:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.118 19:57:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.118 19:57:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.118 19:57:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.118 19:57:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.118 19:57:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.118 19:57:09 -- setup/common.sh@18 -- # local node= 00:04:02.118 19:57:09 -- setup/common.sh@19 -- # local var val 00:04:02.118 19:57:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.118 19:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.118 19:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.118 19:57:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.118 19:57:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.118 19:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5810980 kB' 'MemAvailable: 7367972 kB' 'Buffers: 3704 kB' 'Cached: 1768844 kB' 'SwapCached: 0 kB' 'Active: 465584 kB' 'Inactive: 1422808 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422808 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 117556 kB' 'Mapped: 50788 kB' 'Shmem: 10532 kB' 'KReclaimable: 63804 kB' 'Slab: 161220 kB' 'SReclaimable: 63804 kB' 'SUnreclaim: 97416 kB' 'KernelStack: 6608 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 310044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- 
setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.118 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.118 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # continue 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 19:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 19:57:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.119 19:57:09 -- setup/common.sh@33 -- # echo 2048 00:04:02.119 19:57:09 -- setup/common.sh@33 -- # return 0 00:04:02.119 19:57:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.119 19:57:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.119 19:57:09 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.119 19:57:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.119 19:57:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.119 19:57:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:02.119 19:57:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.119 19:57:09 -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.119 19:57:09 -- setup/hugepages.sh@27 -- # local node 00:04:02.119 19:57:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.119 19:57:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.119 19:57:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.119 19:57:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.119 19:57:09 -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.119 19:57:09 -- setup/hugepages.sh@37 -- # local node hp 00:04:02.119 19:57:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.119 19:57:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.119 19:57:09 -- setup/hugepages.sh@41 -- # echo 0 00:04:02.119 19:57:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.119 19:57:09 -- setup/hugepages.sh@41 -- # echo 0 00:04:02.119 19:57:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.119 19:57:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.119 19:57:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.119 19:57:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.120 19:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.120 19:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:02.120 ************************************ 00:04:02.120 START TEST default_setup 00:04:02.120 ************************************ 00:04:02.120 19:57:09 -- common/autotest_common.sh@1114 -- # default_setup 00:04:02.120 19:57:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.120 19:57:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.120 19:57:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.120 19:57:09 -- setup/hugepages.sh@51 -- # shift 00:04:02.120 19:57:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.120 19:57:09 -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.120 19:57:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.120 19:57:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.120 19:57:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.120 19:57:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.120 19:57:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.120 19:57:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.120 19:57:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.120 19:57:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.120 19:57:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.120 19:57:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.120 19:57:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.120 19:57:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.120 19:57:09 -- setup/hugepages.sh@73 -- # return 0 00:04:02.120 19:57:09 -- setup/hugepages.sh@137 -- # setup output 00:04:02.120 19:57:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.120 19:57:09 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.329 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.329 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.329 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.329 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.329 19:57:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:03.329 19:57:10 -- setup/hugepages.sh@89 -- # local node 00:04:03.329 19:57:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.329 19:57:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.329 19:57:10 -- setup/hugepages.sh@92 -- # local surp 00:04:03.329 19:57:10 -- setup/hugepages.sh@93 -- # local resv 00:04:03.329 19:57:10 -- setup/hugepages.sh@94 -- # local anon 00:04:03.329 19:57:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.329 19:57:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.329 19:57:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.329 19:57:10 -- setup/common.sh@18 -- # local node= 00:04:03.329 19:57:10 -- setup/common.sh@19 -- # local var val 00:04:03.329 19:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.329 19:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.329 19:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.329 19:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.329 19:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.329 19:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.329 19:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929048 kB' 'MemAvailable: 9485848 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468552 kB' 'Inactive: 1422836 kB' 'Active(anon): 129344 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120500 kB' 'Mapped: 50904 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161032 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97668 kB' 'KernelStack: 6656 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.329 19:57:10 -- 
setup/common.sh@32 -- # continue 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.329 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.329 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # 
[[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 
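Each [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue pair in this stretch is the same lookup being traced one /proc/meminfo field at a time: get_meminfo reads the file into an array, splits each entry on ': ', and echoes the value once the requested key matches. Reduced to a standalone sketch (simplified from the trace, not the setup/common.sh helper verbatim):

    get=AnonHugePages                  # field the caller asked for
    mapfile -t mem < /proc/meminfo     # one array element per meminfo line
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        if [[ $var == "$get" ]]; then
            echo "$val"                # value column only; the trailing kB lands in the discarded field
            break
        fi
    done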
00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.330 19:57:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.330 19:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.330 19:57:10 -- setup/common.sh@33 -- # echo 0 00:04:03.330 19:57:10 -- setup/common.sh@33 -- # return 0 00:04:03.330 19:57:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.330 19:57:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.330 19:57:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.330 19:57:10 -- setup/common.sh@18 -- # local node= 00:04:03.330 19:57:10 -- setup/common.sh@19 -- # local var val 00:04:03.330 19:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.330 19:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.330 19:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.330 19:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.330 19:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.330 19:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.330 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929048 kB' 'MemAvailable: 9485848 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468284 kB' 'Inactive: 1422836 kB' 'Active(anon): 129076 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120152 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161032 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97668 kB' 'KernelStack: 6608 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.331 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.331 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # 
continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.332 19:57:10 -- setup/common.sh@33 -- # echo 0 00:04:03.332 19:57:10 -- setup/common.sh@33 -- # return 0 00:04:03.332 19:57:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.332 19:57:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.332 19:57:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.332 19:57:10 -- setup/common.sh@18 -- # local node= 00:04:03.332 19:57:10 -- setup/common.sh@19 -- # local var val 00:04:03.332 19:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.332 19:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.332 19:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.332 19:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.332 19:57:10 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:03.332 19:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7928796 kB' 'MemAvailable: 9485596 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468476 kB' 'Inactive: 1422836 kB' 'Active(anon): 129268 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120336 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161032 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97668 kB' 'KernelStack: 6576 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.332 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.332 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 
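For readers scanning the trace: the long runs of '[[ <key> == HugePages_Surp ]]' entries followed by '# continue' are the xtrace of a key-lookup loop in setup/common.sh, and the same loop runs again below for HugePages_Rsvd and HugePages_Total. A minimal stand-alone sketch of that pattern, reconstructed from the trace rather than copied verbatim from the SPDK helper (the guard on $node and the example calls at the end are illustrative):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    # Read a meminfo file into an array, drop the "Node <n> " prefix that the
    # per-node files carry, then scan key/value pairs until the requested key
    # is found. The repeated "# continue" entries in the log are this loop
    # skipping every key that is not the one asked for.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem

        # a per-node query reads that node's own meminfo instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp      # system-wide surplus huge pages
    get_meminfo HugePages_Surp 0    # same key, restricted to NUMA node 0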
00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- 
setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.333 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.333 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.333 19:57:10 -- setup/common.sh@33 -- # echo 0 00:04:03.333 19:57:10 -- setup/common.sh@33 -- # return 0 00:04:03.333 nr_hugepages=1024 00:04:03.333 resv_hugepages=0 00:04:03.333 surplus_hugepages=0 00:04:03.333 19:57:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.333 19:57:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.333 19:57:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.333 19:57:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.333 anon_hugepages=0 00:04:03.333 19:57:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.333 19:57:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.333 19:57:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.595 19:57:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.595 19:57:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.595 19:57:10 -- setup/common.sh@18 -- # local node= 00:04:03.595 19:57:10 -- setup/common.sh@19 -- # local var val 00:04:03.596 19:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.596 19:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.596 19:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.596 19:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.596 19:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.596 19:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7928796 kB' 'MemAvailable: 9485596 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468420 kB' 'Inactive: 1422836 kB' 'Active(anon): 129212 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120332 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161032 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97668 kB' 'KernelStack: 6624 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 
19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 
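The values these lookups return feed a small accounting check in setup/hugepages.sh: the nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0 figures echoed earlier must add up to the HugePages_Total the kernel reports, and with no surplus or reserved pages the two counts must match exactly. A sketch of that check, reusing the get_meminfo sketch above (the commented values are the ones observed in this run):

    nr_hugepages=1024                      # requested by the default_setup test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run

    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages ))               || echo "unexpected surplus/reserved pages" >&2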
00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.596 19:57:10 -- setup/common.sh@32 -- # continue 
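The meminfo dumps in this trace are also self-consistent on the hugepage side: Hugetlb is simply HugePages_Total multiplied by Hugepagesize, so 1024 pages of 2048 kB account for the 2097152 kB shown. A quick shell check of that arithmetic:

    total=1024        # HugePages_Total from the dump above
    pagesz_kb=2048    # Hugepagesize: 2048 kB
    echo $(( total * pagesz_kb ))    # 2097152, matching 'Hugetlb: 2097152 kB'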
00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 
19:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.597 19:57:10 -- setup/common.sh@33 -- # echo 1024 00:04:03.597 19:57:10 -- setup/common.sh@33 -- # return 0 00:04:03.597 19:57:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.597 19:57:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.597 19:57:10 -- setup/hugepages.sh@27 -- # local node 00:04:03.597 19:57:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.597 19:57:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.597 19:57:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.597 19:57:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.597 19:57:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.597 19:57:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.597 19:57:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.597 19:57:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.597 19:57:10 -- setup/common.sh@18 -- # local node=0 00:04:03.597 19:57:10 -- setup/common.sh@19 -- # local var val 00:04:03.597 19:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.597 19:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.597 19:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.597 19:57:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.597 19:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.597 19:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7928796 kB' 'MemUsed: 4308300 kB' 'SwapCached: 0 kB' 'Active: 468376 kB' 'Inactive: 1422836 kB' 'Active(anon): 129168 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1772536 kB' 'Mapped: 50788 kB' 'AnonPages: 120256 kB' 'Shmem: 10492 kB' 'KernelStack: 6608 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63364 kB' 'Slab: 161028 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 19:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # 
continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # continue 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 19:57:11 -- setup/common.sh@33 -- # echo 0 00:04:03.598 19:57:11 -- setup/common.sh@33 -- # return 0 00:04:03.598 node0=1024 expecting 1024 00:04:03.598 19:57:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.598 19:57:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.598 19:57:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.598 19:57:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.598 19:57:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.598 19:57:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.598 00:04:03.598 real 0m1.269s 00:04:03.598 user 0m0.489s 00:04:03.598 sys 0m0.629s 00:04:03.598 19:57:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.598 ************************************ 00:04:03.598 END TEST default_setup 00:04:03.598 ************************************ 00:04:03.598 19:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:03.598 19:57:11 -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:03.598 19:57:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.598 19:57:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.598 19:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:03.598 ************************************ 00:04:03.598 START TEST per_node_1G_alloc 00:04:03.598 ************************************ 00:04:03.598 19:57:11 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:03.598 19:57:11 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:03.598 19:57:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:03.598 19:57:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.598 19:57:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.598 19:57:11 -- setup/hugepages.sh@51 -- # shift 00:04:03.598 19:57:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.598 19:57:11 -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.598 19:57:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.598 19:57:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.598 19:57:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.598 19:57:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.598 19:57:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.598 19:57:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.598 19:57:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.598 19:57:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.598 19:57:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.598 19:57:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.598 19:57:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.598 19:57:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.598 19:57:11 -- setup/hugepages.sh@73 -- # return 0 00:04:03.598 19:57:11 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:03.598 19:57:11 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:03.598 19:57:11 -- setup/hugepages.sh@146 -- # setup output 00:04:03.598 19:57:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.598 19:57:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.123 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.123 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.123 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.123 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.123 19:57:11 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:04.123 19:57:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:04.123 19:57:11 -- setup/hugepages.sh@89 -- # local node 00:04:04.123 19:57:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.123 19:57:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.123 19:57:11 -- setup/hugepages.sh@92 -- # local surp 00:04:04.123 19:57:11 -- setup/hugepages.sh@93 -- # local resv 00:04:04.123 19:57:11 -- setup/hugepages.sh@94 -- # local anon 00:04:04.123 19:57:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.123 19:57:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.123 19:57:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.123 19:57:11 -- 
setup/common.sh@18 -- # local node= 00:04:04.123 19:57:11 -- setup/common.sh@19 -- # local var val 00:04:04.123 19:57:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.123 19:57:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.123 19:57:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.123 19:57:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.123 19:57:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.123 19:57:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.123 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.123 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8977996 kB' 'MemAvailable: 10534796 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468544 kB' 'Inactive: 1422836 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 50844 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161124 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97760 kB' 'KernelStack: 6548 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 
-- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.124 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.124 19:57:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.125 19:57:11 -- setup/common.sh@33 -- # echo 0 00:04:04.125 19:57:11 -- setup/common.sh@33 -- # return 0 00:04:04.125 19:57:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.125 19:57:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.125 19:57:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.125 19:57:11 -- setup/common.sh@18 -- # local node= 00:04:04.125 19:57:11 -- setup/common.sh@19 -- # local var val 00:04:04.125 19:57:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.125 19:57:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.125 19:57:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.125 19:57:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.125 19:57:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.125 19:57:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8977772 kB' 'MemAvailable: 10534572 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468612 kB' 'Inactive: 1422836 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120532 kB' 'Mapped: 50860 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161164 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97800 kB' 'KernelStack: 6596 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # 
continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.125 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.125 19:57:11 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.126 19:57:11 -- setup/common.sh@33 -- # echo 0 00:04:04.126 19:57:11 -- setup/common.sh@33 -- # return 0 00:04:04.126 19:57:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:04.126 19:57:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.126 19:57:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.126 19:57:11 -- setup/common.sh@18 -- # local node= 00:04:04.126 19:57:11 -- setup/common.sh@19 -- # local var val 00:04:04.126 19:57:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.126 19:57:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.126 19:57:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.126 19:57:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.126 19:57:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.126 19:57:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8977772 kB' 'MemAvailable: 10534572 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468412 kB' 'Inactive: 1422836 kB' 'Active(anon): 129204 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120296 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161196 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97832 kB' 'KernelStack: 6624 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.126 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.126 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 
-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.127 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.127 19:57:11 -- setup/common.sh@33 -- # echo 0 00:04:04.127 19:57:11 -- setup/common.sh@33 -- # return 0 00:04:04.127 19:57:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:04.127 19:57:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:04.127 nr_hugepages=512 00:04:04.127 19:57:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.127 resv_hugepages=0 00:04:04.127 surplus_hugepages=0 00:04:04.127 anon_hugepages=0 00:04:04.127 19:57:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.127 19:57:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.127 19:57:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:04.127 19:57:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:04.127 19:57:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.127 19:57:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.127 19:57:11 -- setup/common.sh@18 -- # local node= 00:04:04.127 19:57:11 -- setup/common.sh@19 -- # local var val 00:04:04.127 19:57:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.127 19:57:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.127 19:57:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.127 19:57:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.127 19:57:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.127 19:57:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.127 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8977772 kB' 'MemAvailable: 10534572 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468376 kB' 'Inactive: 1422836 kB' 'Active(anon): 129168 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120296 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161172 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97808 kB' 'KernelStack: 6624 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 
00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 
00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.128 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.128 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 
-- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.129 19:57:11 -- setup/common.sh@33 -- # echo 512 00:04:04.129 19:57:11 -- setup/common.sh@33 -- # return 0 00:04:04.129 19:57:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:04.129 19:57:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.129 19:57:11 -- setup/hugepages.sh@27 -- # local node 00:04:04.129 19:57:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.129 19:57:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.129 19:57:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.129 19:57:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.129 19:57:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.129 19:57:11 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:04.129 19:57:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.129 19:57:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.129 19:57:11 -- setup/common.sh@18 -- # local node=0 00:04:04.129 19:57:11 -- setup/common.sh@19 -- # local var val 00:04:04.129 19:57:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.129 19:57:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.129 19:57:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.129 19:57:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.129 19:57:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.129 19:57:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8977772 kB' 'MemUsed: 3259324 kB' 'SwapCached: 0 kB' 'Active: 468424 kB' 'Inactive: 1422836 kB' 'Active(anon): 129216 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1772536 kB' 'Mapped: 50788 kB' 'AnonPages: 120352 kB' 'Shmem: 10492 kB' 'KernelStack: 6640 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63364 kB' 'Slab: 161172 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 
19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.129 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.129 19:57:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 
19:57:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # continue 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.130 19:57:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.130 19:57:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.130 19:57:11 -- setup/common.sh@33 -- # echo 0 00:04:04.130 19:57:11 -- setup/common.sh@33 -- # return 0 00:04:04.130 19:57:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.130 19:57:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.130 19:57:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.130 node0=512 expecting 512 00:04:04.130 ************************************ 00:04:04.130 END TEST per_node_1G_alloc 00:04:04.130 ************************************ 00:04:04.130 19:57:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.130 19:57:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:04.130 00:04:04.130 real 0m0.614s 00:04:04.130 user 0m0.252s 00:04:04.130 sys 0m0.374s 00:04:04.130 19:57:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.130 19:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.130 19:57:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:04.130 19:57:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.130 19:57:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.130 19:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.130 ************************************ 00:04:04.130 START TEST even_2G_alloc 00:04:04.130 ************************************ 00:04:04.130 19:57:11 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:04.130 19:57:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:04.130 19:57:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.130 19:57:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.130 19:57:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.130 19:57:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.130 19:57:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.130 19:57:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.130 
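The trace above closes out per_node_1G_alloc (node 0 held 512 pages of 2048 kB, i.e. 1 GiB, hence "node0=512 expecting 512"), and even_2G_alloc then asks get_test_nr_hugepages for 2097152, which the script turns into nr_hugepages=1024. A minimal bash sketch of that sizing arithmetic as it can be read off the trace follows; it is illustrative only, not the SPDK setup/hugepages.sh source, and the kB unit of the argument is inferred from the test name (2097152 kB == 2 GiB) and the resulting 1024 pages of 2048 kB:

    #!/usr/bin/env bash
    # Sketch of the hugepage sizing shown in the trace (names and the guard fallback
    # are assumptions; only the 2097152 -> 1024 division is taken from the log).
    hugepagesize_kb=2048                      # Hugepagesize: 2048 kB in the meminfo dumps below

    get_test_nr_hugepages() {
        local size_kb=$1                      # requested size in kB; 2097152 kB == 2 GiB here
        # hugepages.sh@55 checks that the request is at least one default-sized page
        (( size_kb >= hugepagesize_kb )) || size_kb=$hugepagesize_kb
        echo $(( size_kb / hugepagesize_kb )) # 2097152 / 2048 -> 1024
    }

    nr=$(get_test_nr_hugepages 2097152)
    echo "nr_hugepages=$nr"                   # 1024; with a single NUMA node all of it
                                              # is accounted to node 0 (nodes_test[0]=1024)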
19:57:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.130 19:57:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.130 19:57:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.130 19:57:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:04.130 19:57:11 -- setup/hugepages.sh@83 -- # : 0 00:04:04.130 19:57:11 -- setup/hugepages.sh@84 -- # : 0 00:04:04.130 19:57:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.130 19:57:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:04.130 19:57:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:04.130 19:57:11 -- setup/hugepages.sh@153 -- # setup output 00:04:04.130 19:57:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.130 19:57:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.706 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.706 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.706 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.707 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.707 19:57:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:04.707 19:57:12 -- setup/hugepages.sh@89 -- # local node 00:04:04.707 19:57:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.707 19:57:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.707 19:57:12 -- setup/hugepages.sh@92 -- # local surp 00:04:04.707 19:57:12 -- setup/hugepages.sh@93 -- # local resv 00:04:04.707 19:57:12 -- setup/hugepages.sh@94 -- # local anon 00:04:04.707 19:57:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.707 19:57:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.707 19:57:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.707 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:04.707 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:04.707 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.707 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.707 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.707 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.707 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.707 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932804 kB' 'MemAvailable: 9489604 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468828 kB' 'Inactive: 1422836 kB' 'Active(anon): 129620 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120472 kB' 'Mapped: 51024 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160920 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97556 kB' 'KernelStack: 6628 kB' 'PageTables: 
4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- 
setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.707 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.707 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.708 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:04.708 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:04.708 19:57:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.708 19:57:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.708 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.708 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:04.708 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:04.708 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.708 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.708 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.708 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.708 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.708 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932804 kB' 'MemAvailable: 9489604 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468864 kB' 'Inactive: 1422836 kB' 'Active(anon): 129656 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120548 kB' 'Mapped: 51024 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160916 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97552 kB' 'KernelStack: 6628 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 
8388608 kB' 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 
19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.708 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.708 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
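The long run of "[[ <field> == HugePages_Surp ]]" / "continue" entries here is get_meminfo walking the captured /proc/meminfo snapshot field by field until it reaches HugePages_Surp, whose value (0) it echoes a little further down. A minimal sketch of that lookup pattern, assuming a plain /proc/meminfo read; the setup/common.sh version traced here also handles the per-node meminfo files under /sys/devices/system/node (the "local node=" and node meminfo checks above), which this sketch leaves out:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the "Node <n> " prefix strip below

    # Echo the value of one /proc/meminfo field, mirroring the scan in the trace:
    # read every "Name: value kB" line and print the value once the name matches.
    get_meminfo() {
        local get=$1 mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp            # -> 0 on this run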
00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 
-- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.709 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:04.709 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:04.709 19:57:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:04.709 19:57:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.709 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.709 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:04.709 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:04.709 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.709 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.709 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.709 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.709 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.709 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7933068 kB' 'MemAvailable: 9489868 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468484 kB' 'Inactive: 1422836 kB' 'Active(anon): 129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120348 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160936 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97572 kB' 'KernelStack: 6576 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 322952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- 
# continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.709 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.709 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 
-- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.710 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.710 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.711 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:04.711 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:04.711 19:57:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:04.711 nr_hugepages=1024 00:04:04.711 resv_hugepages=0 00:04:04.711 surplus_hugepages=0 00:04:04.711 anon_hugepages=0 00:04:04.711 19:57:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.711 19:57:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.711 19:57:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.711 19:57:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.711 19:57:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.711 19:57:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.711 19:57:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.711 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.711 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:04.711 19:57:12 -- 
setup/common.sh@19 -- # local var val 00:04:04.711 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.711 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.711 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.711 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.711 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.711 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7933156 kB' 'MemAvailable: 9489956 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468312 kB' 'Inactive: 1422836 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120256 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160924 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97560 kB' 'KernelStack: 6608 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 
19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 
19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.711 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.711 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- 
setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.712 19:57:12 -- setup/common.sh@33 -- # echo 1024 00:04:04.712 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:04.712 19:57:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.712 19:57:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.712 19:57:12 -- setup/hugepages.sh@27 -- # local node 00:04:04.712 19:57:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.712 19:57:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.712 19:57:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.712 19:57:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.712 19:57:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.712 19:57:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.712 19:57:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.712 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.712 19:57:12 -- setup/common.sh@18 -- # local node=0 00:04:04.712 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:04.712 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.712 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.712 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.712 19:57:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.712 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.712 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.712 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7933156 kB' 'MemUsed: 4303940 kB' 'SwapCached: 0 kB' 'Active: 468304 kB' 'Inactive: 1422836 kB' 'Active(anon): 129096 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1772536 kB' 'Mapped: 50788 kB' 'AnonPages: 120248 kB' 'Shmem: 10492 kB' 'KernelStack: 6608 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 63364 kB' 'Slab: 160924 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.712 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.712 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # continue 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.713 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.713 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.713 
19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:04.713 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:04.713 node0=1024 expecting 1024 00:04:04.713 19:57:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.713 19:57:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.713 19:57:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.713 19:57:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.713 19:57:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.713 00:04:04.713 real 0m0.580s 00:04:04.713 user 0m0.233s 00:04:04.713 sys 0m0.359s 00:04:04.713 ************************************ 00:04:04.713 END TEST even_2G_alloc 00:04:04.713 ************************************ 00:04:04.713 19:57:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.713 19:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.713 19:57:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:04.713 19:57:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.713 19:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.713 19:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.713 ************************************ 00:04:04.713 START TEST odd_alloc 00:04:04.713 ************************************ 00:04:04.713 19:57:12 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:04.713 19:57:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:04.713 19:57:12 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:04.713 19:57:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:04.713 19:57:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.713 19:57:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.713 19:57:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.713 19:57:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:04.713 19:57:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.713 19:57:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.713 19:57:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.713 19:57:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:04.713 19:57:12 -- setup/hugepages.sh@83 -- # : 0 00:04:04.713 19:57:12 -- setup/hugepages.sh@84 -- # : 0 00:04:04.713 19:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.713 19:57:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:04.713 19:57:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:04.713 19:57:12 -- setup/hugepages.sh@160 -- # setup output 00:04:04.713 19:57:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.713 19:57:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.289 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.289 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.289 0000:00:07.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:04:05.289 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.289 19:57:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:05.289 19:57:12 -- setup/hugepages.sh@89 -- # local node 00:04:05.289 19:57:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.289 19:57:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.289 19:57:12 -- setup/hugepages.sh@92 -- # local surp 00:04:05.289 19:57:12 -- setup/hugepages.sh@93 -- # local resv 00:04:05.289 19:57:12 -- setup/hugepages.sh@94 -- # local anon 00:04:05.289 19:57:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.289 19:57:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.289 19:57:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.289 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:05.289 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:05.289 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.289 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.289 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.289 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.289 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.289 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.289 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.289 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7938216 kB' 'MemAvailable: 9495016 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 469124 kB' 'Inactive: 1422836 kB' 'Active(anon): 129916 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 50944 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160876 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97512 kB' 'KernelStack: 6732 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55736 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 
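The odd_alloc stage that started just above asked for 2098176 kB of hugepage memory (HUGEMEM=2049, i.e. 2049 MB) and settled on nr_hugepages=1025, one page more than the even test, which is the point of the 'odd' variant. The snapshot in this pass is consistent with that: with the 2048 kB Hugepagesize reported here, 1025 pages is exactly the 2099200 kB shown for Hugetlb. A quick back-of-envelope check, with the figures taken from this run (the exact rounding done inside get_test_nr_hugepages is not visible in this excerpt):

    # Figures from this run; purely illustrative arithmetic.
    echo $((2049 * 1024))   # HUGEMEM=2049 MB  -> 2098176 kB requested (matches size=2098176)
    echo $((1025 * 2048))   # 1025 pages x 2048 kB -> 2099200 kB (matches Hugetlb above)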
19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.290 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.290 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.290 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:05.290 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:05.290 19:57:12 -- 
setup/hugepages.sh@97 -- # anon=0 00:04:05.290 19:57:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.290 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.291 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:05.291 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:05.291 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.291 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.291 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.291 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.291 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.291 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7938228 kB' 'MemAvailable: 9495028 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468236 kB' 'Inactive: 1422836 kB' 'Active(anon): 129028 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120160 kB' 'Mapped: 50824 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160892 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97528 kB' 'KernelStack: 6636 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # 
continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.291 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.291 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': 
' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.292 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:05.292 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:05.292 19:57:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.292 19:57:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.292 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.292 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:05.292 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:05.292 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.292 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.292 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.292 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.292 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.292 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.292 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7938420 kB' 'MemAvailable: 9495220 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468460 kB' 'Inactive: 1422836 
kB' 'Active(anon): 129252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120336 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160900 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97536 kB' 'KernelStack: 6624 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 
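The AnonHugePages, HugePages_Surp and (still in progress here) HugePages_Rsvd lookups in this stretch feed the same accounting check that passed for even_2G_alloc earlier in the log at hugepages.sh@110 and @128-130: the kernel's HugePages_Total has to equal the requested count plus surplus and reserved pages, and each node has to report the share assigned to it. Reusing the get_meminfo sketch from earlier, the check reduces to roughly the lines below; the 1025 figure comes from this run, the rest is a hedged reconstruction rather than the script itself.

    # Minimal sketch of the verify_nr_hugepages accounting traced around hugepages.sh@110/@128.
    # Assumes the get_meminfo sketch shown earlier in this log.
    expected=1025                               # nr_hugepages requested by odd_alloc
    total=$(get_meminfo HugePages_Total)        # 1025 in the snapshots above
    surp=$(get_meminfo HugePages_Surp)          # 0
    resv=$(get_meminfo HugePages_Rsvd)          # 0

    # Global accounting must balance.
    ((total == expected + surp + resv)) || { echo "hugepage accounting mismatch" >&2; exit 1; }

    # Each NUMA node must hold what the test assigned to it (a single node in this VM).
    node0_surp=$(get_meminfo HugePages_Surp 0)
    echo "node0=$((expected + node0_surp)) expecting $expected"   # cf. 'node0=1024 expecting 1024' above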
19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.292 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.292 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.293 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.293 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:05.293 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:05.293 19:57:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.293 19:57:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:05.293 nr_hugepages=1025 00:04:05.293 19:57:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.293 resv_hugepages=0 00:04:05.293 19:57:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.293 surplus_hugepages=0 00:04:05.293 19:57:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.293 anon_hugepages=0 00:04:05.293 19:57:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:05.293 19:57:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:05.293 19:57:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.293 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.293 19:57:12 -- setup/common.sh@18 -- # local node= 00:04:05.293 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:05.293 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.293 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.293 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.293 19:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.293 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.293 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.293 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7938420 kB' 'MemAvailable: 9495220 kB' 'Buffers: 3704 kB' 'Cached: 1768832 kB' 'SwapCached: 0 kB' 'Active: 468448 kB' 'Inactive: 1422836 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120320 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160900 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97536 kB' 'KernelStack: 6592 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13457552 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.294 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.294 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.294 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.295 19:57:12 -- setup/common.sh@33 -- # echo 1025 00:04:05.295 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:05.295 19:57:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:05.295 19:57:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.295 19:57:12 -- setup/hugepages.sh@27 -- # local node 00:04:05.295 19:57:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.295 19:57:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:05.295 19:57:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.295 19:57:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.295 19:57:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.295 19:57:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.295 19:57:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.295 19:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.295 19:57:12 -- setup/common.sh@18 -- # local node=0 00:04:05.295 19:57:12 -- setup/common.sh@19 -- # local var val 00:04:05.295 19:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.295 19:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.295 19:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.295 19:57:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.295 19:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.295 19:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7938172 kB' 'MemUsed: 4298924 kB' 'SwapCached: 0 kB' 'Active: 468468 kB' 'Inactive: 1422836 kB' 'Active(anon): 129260 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1772536 kB' 'Mapped: 50788 kB' 'AnonPages: 120352 kB' 'Shmem: 10492 kB' 'KernelStack: 6624 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63364 kB' 'Slab: 160896 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 
19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 
19:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.295 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.295 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # continue 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.296 19:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.296 19:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.296 19:57:12 -- setup/common.sh@33 -- # echo 0 00:04:05.296 19:57:12 -- setup/common.sh@33 -- # return 0 00:04:05.296 19:57:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.296 node0=1025 expecting 1025 00:04:05.296 19:57:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.296 19:57:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.296 19:57:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.296 19:57:12 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:05.296 19:57:12 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:05.296 00:04:05.296 real 0m0.560s 00:04:05.296 user 0m0.238s 00:04:05.296 sys 0m0.332s 00:04:05.296 19:57:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.296 19:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:05.296 ************************************ 00:04:05.296 END TEST odd_alloc 00:04:05.296 ************************************ 00:04:05.296 19:57:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:05.296 19:57:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.296 19:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.296 19:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:05.557 
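(For reference: the long trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo — or a node's own meminfo file — field by field until it finds the requested key, and setup/hugepages.sh then checking that the odd-sized reservation of 1025 hugepages is fully accounted for, with all of them on node 0. The following is a minimal stand-alone sketch of that logic, not the actual SPDK scripts; the real helper loops in pure bash over mapfile output, whereas this illustration uses awk, and the names below are only illustrative.)

# Sketch only: approximate the get_meminfo lookup seen in the trace.
get_meminfo() {
    local field=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo, as the trace does for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <n> "; strip it, then print the
    # value of the requested field, or 0 if it is absent (mirrors the "echo 0" path).
    awk -v f="$field" '
        BEGIN { key = f ":" }
        { sub(/^Node [0-9]+ /, "") }
        $1 == key { print $2; found = 1 }
        END { if (!found) print 0 }
    ' "$mem_f"
}

# Sketch of the accounting check the odd_alloc test just performed:
# the kernel-reported total must equal requested + surplus + reserved pages.
nr_requested=1025
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
if (( total == nr_requested + surp + resv )); then
    echo "node0=$(get_meminfo HugePages_Total 0) expecting $nr_requested"
fi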
************************************ 00:04:05.557 START TEST custom_alloc 00:04:05.557 ************************************ 00:04:05.557 19:57:12 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:05.557 19:57:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:05.557 19:57:12 -- setup/hugepages.sh@169 -- # local node 00:04:05.557 19:57:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:05.557 19:57:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:05.557 19:57:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:05.557 19:57:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:05.557 19:57:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:05.557 19:57:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:05.557 19:57:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.557 19:57:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.557 19:57:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.557 19:57:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.557 19:57:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.557 19:57:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.557 19:57:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.557 19:57:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.557 19:57:12 -- setup/hugepages.sh@83 -- # : 0 00:04:05.557 19:57:12 -- setup/hugepages.sh@84 -- # : 0 00:04:05.557 19:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:05.557 19:57:12 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:05.557 19:57:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:05.557 19:57:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:05.557 19:57:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.557 19:57:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.557 19:57:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.557 19:57:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.557 19:57:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.557 19:57:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.557 19:57:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:05.557 19:57:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:05.557 19:57:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:05.557 19:57:12 -- setup/hugepages.sh@78 -- # return 0 00:04:05.557 19:57:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:05.557 19:57:12 -- setup/hugepages.sh@187 -- # setup output 00:04:05.557 19:57:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.557 19:57:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.821 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.821 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.821 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.821 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.821 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.821 19:57:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:05.821 19:57:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:05.821 19:57:13 -- setup/hugepages.sh@89 -- # local node 00:04:05.821 19:57:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.821 19:57:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.821 19:57:13 -- setup/hugepages.sh@92 -- # local surp 00:04:05.821 19:57:13 -- setup/hugepages.sh@93 -- # local resv 00:04:05.821 19:57:13 -- setup/hugepages.sh@94 -- # local anon 00:04:05.821 19:57:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.821 19:57:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.821 19:57:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.821 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:05.821 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:05.821 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.821 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.821 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.821 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.821 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.821 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8988680 kB' 'MemAvailable: 10545484 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468692 kB' 'Inactive: 1422840 kB' 'Active(anon): 129484 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120800 kB' 'Mapped: 50920 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161052 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97688 kB' 'KernelStack: 6664 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.821 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.821 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 
19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.822 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:05.822 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:05.822 19:57:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.822 19:57:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.822 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.822 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:05.822 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:05.822 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.822 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.822 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.822 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.822 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.822 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8988680 kB' 'MemAvailable: 10545484 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468400 kB' 'Inactive: 1422840 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120236 kB' 'Mapped: 50920 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161052 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97688 kB' 'KernelStack: 6632 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 
-- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.822 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.822 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # 
continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.823 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:05.823 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:05.823 19:57:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.823 19:57:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.823 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.823 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:05.823 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:05.823 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.823 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.823 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.823 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.823 19:57:13 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:05.823 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8988680 kB' 'MemAvailable: 10545484 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468204 kB' 'Inactive: 1422840 kB' 'Active(anon): 128996 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120040 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161056 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97692 kB' 'KernelStack: 6608 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.823 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.823 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 
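The long run of continue entries above and below is setup/common.sh's get_meminfo helper at work: it loads /proc/meminfo (or the per-NUMA-node copy when a node argument is given), strips any leading "Node <N>" prefix, and walks the file as "key: value" pairs with IFS=': ' read, skipping every field until it reaches the one requested, here HugePages_Rsvd. A minimal sketch of that idea, simplified rather than the exact SPDK implementation:

    get_meminfo() {                        # usage: get_meminfo <field> [numa-node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # with a node argument, prefer the per-NUMA-node view of the same counters
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # per-node files prefix every line with "Node <N> "; drop it, then scan
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. prints 0 for HugePages_Rsvd here
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

For example, get_meminfo HugePages_Rsvd returns the system-wide value, while get_meminfo HugePages_Surp 0 returns the node 0 value; those are the two kinds of calls this trace goes on to make.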
00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- 
setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.824 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.824 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.825 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:05.825 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:05.825 19:57:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.825 19:57:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:05.825 nr_hugepages=512 00:04:05.825 19:57:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.825 resv_hugepages=0 00:04:05.825 19:57:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.825 surplus_hugepages=0 00:04:05.825 19:57:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.825 anon_hugepages=0 00:04:05.825 19:57:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.825 19:57:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:05.825 19:57:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.825 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.825 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:05.825 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:05.825 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.825 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.825 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.825 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.825 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.825 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.825 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8988680 kB' 'MemAvailable: 10545484 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468180 kB' 'Inactive: 1422840 kB' 'Active(anon): 128972 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 120276 kB' 'Mapped: 50788 kB' 'Shmem: 
10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161052 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97688 kB' 'KernelStack: 6592 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 
19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 
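By this point verify_nr_hugepages has already pulled anon=0, surp=0 and resv=0 out of meminfo and is re-scanning for HugePages_Total; the echoes at hugepages.sh@102-@105 and the arithmetic at @107/@110 boil down to confirming that the 512 pages the custom_alloc test configured are all ordinary pool pages, with nothing surplus, reserved, or coming from transparent hugepages. A condensed sketch, with a tiny stand-in for get_meminfo and variable names chosen for this sketch rather than taken from the script:

    meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    requested=512                               # pages the custom_alloc test set up
    anon=$(meminfo AnonHugePages)               # transparent hugepages in use
    surp=$(meminfo HugePages_Surp)              # surplus pages in the pool
    resv=$(meminfo HugePages_Rsvd)              # reserved but not yet faulted in
    total=$(meminfo HugePages_Total)            # pages actually in the pool

    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # mirrors the traced check: requested == total + surplus + reserved, which only
    # holds here because surplus and reserved are both zero and the pool is exactly 512
    (( requested == total + surp + resv )) || exit 1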
00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.825 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.825 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 
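Once those totals check out, the loop traced further down (the hugepages.sh@115-@128 entries) walks every /sys/devices/system/node/node<N> directory, only node0 on this single-node VM, folds in the per-node reserved and surplus counts, and prints the "node0=512 expecting 512" line before the final string compare [[ 512 == \5\1\2 ]]. A rough per-node version of that check, using the standard per-node sysfs counter for the 2048 kB pool; the paths and names below are illustrative, not lifted from hugepages.sh:

    declare -A nodes_sys
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # how many 2 MiB hugepages the kernel currently assigns to this node
        nodes_sys[$node]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    done

    expected=512                 # this run placed the whole pool on node 0
    for node in "${!nodes_sys[@]}"; do
        echo "node${node}=${nodes_sys[$node]} expecting ${expected}"
        [[ ${nodes_sys[$node]} == "$expected" ]] || exit 1
    done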
00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 
19:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.826 19:57:13 -- setup/common.sh@33 -- # echo 512 00:04:05.826 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:05.826 19:57:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.826 19:57:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.826 19:57:13 -- setup/hugepages.sh@27 -- # local node 00:04:05.826 19:57:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.826 19:57:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.826 19:57:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.826 19:57:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.826 19:57:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.826 19:57:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.826 19:57:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.826 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.826 19:57:13 -- setup/common.sh@18 -- # local node=0 00:04:05.826 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:05.826 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.826 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.826 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.826 19:57:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.826 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.826 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8988428 kB' 'MemUsed: 3248668 kB' 'SwapCached: 0 kB' 'Active: 468480 kB' 'Inactive: 1422840 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1772540 kB' 'Mapped: 50788 kB' 'AnonPages: 120352 kB' 'Shmem: 10492 kB' 'KernelStack: 6608 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63364 kB' 'Slab: 161036 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.826 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.826 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.827 19:57:13 -- setup/common.sh@32 -- # continue 00:04:05.827 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.139 19:57:13 -- setup/common.sh@32 -- # 
continue 00:04:06.139 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.140 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:06.140 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:06.140 node0=512 expecting 512 00:04:06.140 19:57:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.140 19:57:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.140 19:57:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.140 19:57:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.140 19:57:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.140 19:57:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.140 00:04:06.140 real 0m0.530s 00:04:06.140 user 0m0.223s 00:04:06.140 sys 0m0.324s 00:04:06.140 19:57:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.140 ************************************ 00:04:06.140 END TEST custom_alloc 00:04:06.140 ************************************ 00:04:06.140 19:57:13 -- common/autotest_common.sh@10 -- # set +x 00:04:06.140 19:57:13 -- 
setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:06.140 19:57:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.140 19:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.140 19:57:13 -- common/autotest_common.sh@10 -- # set +x 00:04:06.140 ************************************ 00:04:06.140 START TEST no_shrink_alloc 00:04:06.140 ************************************ 00:04:06.140 19:57:13 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:06.140 19:57:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:06.140 19:57:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.140 19:57:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.140 19:57:13 -- setup/hugepages.sh@51 -- # shift 00:04:06.140 19:57:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.140 19:57:13 -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.140 19:57:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.140 19:57:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.140 19:57:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.140 19:57:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.140 19:57:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.140 19:57:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.140 19:57:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:06.140 19:57:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.140 19:57:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.140 19:57:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.140 19:57:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.140 19:57:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.140 19:57:13 -- setup/hugepages.sh@73 -- # return 0 00:04:06.140 19:57:13 -- setup/hugepages.sh@198 -- # setup output 00:04:06.140 19:57:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.140 19:57:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.403 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.403 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.403 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.403 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.403 19:57:13 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:06.403 19:57:13 -- setup/hugepages.sh@89 -- # local node 00:04:06.403 19:57:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.403 19:57:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.403 19:57:13 -- setup/hugepages.sh@92 -- # local surp 00:04:06.403 19:57:13 -- setup/hugepages.sh@93 -- # local resv 00:04:06.403 19:57:13 -- setup/hugepages.sh@94 -- # local anon 00:04:06.403 19:57:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.403 19:57:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.403 19:57:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.403 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:06.403 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:06.403 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.403 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.403 19:57:13 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.403 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.403 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.403 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934904 kB' 'MemAvailable: 9491708 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468992 kB' 'Inactive: 1422840 kB' 'Active(anon): 129784 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120968 kB' 'Mapped: 51128 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161020 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97656 kB' 'KernelStack: 6792 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55752 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 
19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
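The run_test no_shrink_alloc call traced a few entries back feeds get_test_nr_hugepages a 2097152 kB request for node 0; with the kernel's 2048 kB Hugepagesize (visible in the meminfo dumps nearby) that comes out to 1024 pages, all booked against node 0 in nodes_test. A minimal sketch of that bookkeeping, assuming the division by the default page size (the trace only shows the size check and the resulting nr_hugepages=1024; names follow the xtrace, not the verbatim setup/hugepages.sh):

    # Hedged sketch of the get_test_nr_hugepages arithmetic traced above.
    size=2097152                 # requested pool size in kB
    default_hugepages=2048       # kernel Hugepagesize in kB (assumed default)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    fi
    user_nodes=(0)               # node_ids=('0') after the shift
    declare -a nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages   # node0 is expected to hold all 1024 pages
    done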
00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.403 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.403 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.404 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:06.404 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:06.404 19:57:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.404 19:57:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.404 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.404 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:06.404 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:06.404 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.404 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.404 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.404 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.404 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.404 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934904 kB' 'MemAvailable: 9491708 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468504 kB' 'Inactive: 1422840 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 
'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 50760 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161048 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97684 kB' 'KernelStack: 6672 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.404 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.404 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- 
setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.405 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.405 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:06.405 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:06.405 19:57:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.405 19:57:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.405 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.405 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:06.405 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:06.405 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.405 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.405 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.405 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.405 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.405 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.405 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934904 kB' 'MemAvailable: 9491708 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468488 kB' 'Inactive: 1422840 kB' 'Active(anon): 129280 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 'Mapped: 50748 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161044 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97680 kB' 'KernelStack: 6624 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 
'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 
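The long runs of IFS=': ' / read -r var val _ / continue that dominate this part of the log are one iteration per /proc/meminfo field: get_meminfo walks the file line by line, skipping everything until it reaches the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and so on), then echoes that field's value and returns. A condensed sketch of that scan, written as a hypothetical stand-in rather than the verbatim setup/common.sh helper:

    # Condensed sketch of the field-by-field /proc/meminfo scan traced above.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node id, the per-node copy under /sys is used when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node lines carry a "Node N " prefix
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # the continues filling the log above
            echo "$val"                      # e.g. the closing "echo 0" for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Calling get_meminfo_sketch HugePages_Surp reproduces the echo 0 / return 0 pair that closes each of these scans.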
00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 
19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.406 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.406 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.407 19:57:13 -- setup/common.sh@33 -- # echo 0 00:04:06.407 19:57:13 -- setup/common.sh@33 -- # return 0 00:04:06.407 19:57:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.407 19:57:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.407 nr_hugepages=1024 00:04:06.407 19:57:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.407 resv_hugepages=0 00:04:06.407 19:57:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.407 surplus_hugepages=0 00:04:06.407 19:57:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.407 anon_hugepages=0 00:04:06.407 19:57:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.407 19:57:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.407 19:57:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.407 19:57:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.407 19:57:13 -- setup/common.sh@18 -- # local node= 00:04:06.407 19:57:13 -- setup/common.sh@19 -- # local var val 00:04:06.407 19:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.407 19:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.407 19:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.407 19:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.407 19:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.407 19:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934904 kB' 'MemAvailable: 9491708 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 468488 kB' 'Inactive: 1422840 kB' 'Active(anon): 129280 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 'Mapped: 50748 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 161044 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97680 kB' 'KernelStack: 6624 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 323520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:13 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.407 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 
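At this point the three helper reads have come back as anon=0, surp=0 and resv=0, nr_hugepages has been echoed as 1024, and the (( ... )) checks a few entries back reduce to plain accounting against the kernel's HugePages_Total, which the log is now scanning for. A rough sketch of that accounting, assuming the stand-in helper sketched earlier (the exact conditions live in setup/hugepages.sh and may differ in detail):

    # Rough sketch of the hugepage accounting the verify step is performing.
    nr_hugepages=1024
    anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in this run (anon_hugepages=0)
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
    total=$(get_meminfo_sketch HugePages_Total)  # 1024
    # 1024 == 1024 + 0 + 0: the configured pool accounts for every page,
    # with nothing left surplus or reserved.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
    (( total == nr_hugepages )) || echo 'pool size differs from the requested 1024'

HugePages_Total is then located with the same field-by-field scan as the three values before it, which is the loop the log resumes below.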
00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.408 19:57:14 -- setup/common.sh@33 -- # echo 1024 00:04:06.408 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:06.408 19:57:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.408 19:57:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.408 19:57:14 -- setup/hugepages.sh@27 -- # local node 00:04:06.408 19:57:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.408 19:57:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.408 19:57:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.408 19:57:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.408 19:57:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.408 19:57:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.408 19:57:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.408 19:57:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.408 19:57:14 -- setup/common.sh@18 -- # local node=0 00:04:06.408 19:57:14 -- 
setup/common.sh@19 -- # local var val 00:04:06.408 19:57:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.408 19:57:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.408 19:57:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.408 19:57:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.408 19:57:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.408 19:57:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934904 kB' 'MemUsed: 4302192 kB' 'SwapCached: 0 kB' 'Active: 468436 kB' 'Inactive: 1422840 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1772540 kB' 'Mapped: 50904 kB' 'AnonPages: 120324 kB' 'Shmem: 10492 kB' 'KernelStack: 6608 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63364 kB' 'Slab: 161036 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 
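This per-node pass (get_meminfo HugePages_Surp 0 against /sys/devices/system/node/node0/meminfo) is the second half of the check: hugepages.sh first enumerates the NUMA nodes and records how many hugepages each one is expected to hold, then adds the reserved and per-node surplus counts before printing the node0=1024 expecting 1024 line further down. A hedged sketch of that accounting, reusing the get_meminfo sketch above; where the expected per-node count comes from is an assumption, since the trace only shows the value 1024 being stored:

    shopt -s extglob
    declare -A nodes_sys nodes_test

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Assumption: the expected count is read from the node's 2 MiB
            # hugepage pool in sysfs; the trace only shows 1024 assigned here.
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        ((no_nodes > 0))
    }

    check_nodes() {
        local resv=$1 node surp
        local -A sorted_t sorted_s

        for node in "${!nodes_test[@]}"; do
            ((nodes_test[node] += resv))
            surp=$(get_meminfo HugePages_Surp "$node") || surp=0   # the node0 query running here
            ((nodes_test[node] += surp))
        done

        for node in "${!nodes_test[@]}"; do
            sorted_t[${nodes_test[node]}]=1
            sorted_s[${nodes_sys[node]}]=1
            echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        done

        # Passes when the observed and expected counts collapse to the same
        # set; with a single node this is the [[ 1024 == 1024 ]] seen below.
        [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]
    }

    get_nodes
    nodes_test[0]=1024   # what the test allocated on node0 in this run
    check_nodes 0        # resv is 0 here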
00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 
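Once the node totals line up, the job re-runs the SPDK setup script a little further down with CLEAR_HUGE=no and NRHUGE=512 (the "setup output" and scripts/setup.sh entries below), so the existing pool is kept and the script only logs that the request is already satisfied. The equivalent manual invocation, with the path as it appears on this VM and normally run as root:

    # Keep the current hugepage pool (CLEAR_HUGE=no) and request 512 pages;
    # with 1024 already allocated the script leaves the allocation untouched.
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # -> INFO: Requested 512 hugepages but 1024 already allocated on node0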
00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 19:57:14 -- setup/common.sh@33 -- # echo 0 00:04:06.409 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:06.409 19:57:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.409 19:57:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.409 node0=1024 expecting 1024 00:04:06.409 19:57:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.409 19:57:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.409 19:57:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.409 19:57:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.409 19:57:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:06.409 19:57:14 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:06.409 19:57:14 -- setup/hugepages.sh@202 -- # setup output 00:04:06.409 19:57:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.409 19:57:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.979 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.979 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.979 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.979 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.979 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:06.979 19:57:14 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:06.979 19:57:14 -- setup/hugepages.sh@89 -- # local node 00:04:06.979 19:57:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.979 19:57:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.979 19:57:14 -- setup/hugepages.sh@92 -- # local surp 00:04:06.979 19:57:14 -- setup/hugepages.sh@93 -- # local resv 00:04:06.979 19:57:14 -- setup/hugepages.sh@94 -- # local anon 00:04:06.979 19:57:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.979 19:57:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.979 19:57:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.979 19:57:14 -- setup/common.sh@18 -- # local node= 00:04:06.979 19:57:14 -- setup/common.sh@19 -- # local var val 00:04:06.979 19:57:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.979 19:57:14 -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:06.979 19:57:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.979 19:57:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.979 19:57:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.979 19:57:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934896 kB' 'MemAvailable: 9491700 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 466284 kB' 'Inactive: 1422840 kB' 'Active(anon): 127076 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117976 kB' 'Mapped: 50376 kB' 'Shmem: 10492 kB' 'KReclaimable: 63364 kB' 'Slab: 160844 kB' 'SReclaimable: 63364 kB' 'SUnreclaim: 97480 kB' 'KernelStack: 6764 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- 
setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ 
Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.980 19:57:14 -- setup/common.sh@33 -- # echo 0 00:04:06.980 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:06.980 19:57:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.980 19:57:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.980 19:57:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.980 19:57:14 -- setup/common.sh@18 -- # local node= 00:04:06.980 19:57:14 -- setup/common.sh@19 -- # local var val 00:04:06.980 19:57:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.980 19:57:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.980 19:57:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.980 19:57:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.980 19:57:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.980 19:57:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934968 kB' 'MemAvailable: 9491764 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 
0 kB' 'Active: 465668 kB' 'Inactive: 1422840 kB' 'Active(anon): 126460 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117568 kB' 'Mapped: 49884 kB' 'Shmem: 10492 kB' 'KReclaimable: 63352 kB' 'Slab: 160804 kB' 'SReclaimable: 63352 kB' 'SUnreclaim: 97452 kB' 'KernelStack: 6608 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.980 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.980 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 
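The lookups in this stretch (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd just below) feed the same verify_nr_hugepages pass whose summary lines (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) opened this section. A rough sketch of that driver, again reusing the get_meminfo sketch above; which meminfo fields sit on the left of the two arithmetic checks is an inference from the expanded 1024 values in the trace:

    verify_nr_hugepages() {
        local nr_hugepages=$1          # expected pool size, 1024 in this run
        local anon=0 surp resv free total

        # AnonHugePages is only queried while transparent hugepages are not
        # disabled; "always [madvise] never" passes this test in the trace.
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # Assumption: the 1024s on the left of the (( ... )) checks are the
        # free and total hugepage counts; both reduce to 1024 == 1024 + 0 + 0 here.
        free=$(get_meminfo HugePages_Free)
        total=$(get_meminfo HugePages_Total)
        ((free == nr_hugepages + surp + resv)) || return 1
        ((total == nr_hugepages + surp + resv)) || return 1
    }

With all four numbers at 1024/0/0/0 the arithmetic gates are trivially satisfied, and the per-node comparison sketched earlier is the only remaining check before the test moves on.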
00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.981 19:57:14 -- setup/common.sh@33 -- # echo 0 00:04:06.981 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:06.981 19:57:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.981 19:57:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.981 19:57:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.981 19:57:14 -- setup/common.sh@18 -- # local node= 00:04:06.981 19:57:14 -- setup/common.sh@19 -- # local var val 00:04:06.981 19:57:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.981 19:57:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.981 19:57:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.981 19:57:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.981 19:57:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.981 19:57:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.981 19:57:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934968 kB' 'MemAvailable: 9491764 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 465664 kB' 'Inactive: 1422840 kB' 'Active(anon): 126456 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117560 kB' 'Mapped: 49940 kB' 'Shmem: 10492 kB' 'KReclaimable: 63352 kB' 'Slab: 160808 kB' 'SReclaimable: 63352 kB' 'SUnreclaim: 97456 kB' 'KernelStack: 6528 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.981 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.981 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- 
# continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.982 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.982 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 
-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.983 19:57:14 -- setup/common.sh@33 -- # echo 0 00:04:06.983 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:06.983 19:57:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.983 nr_hugepages=1024 00:04:06.983 resv_hugepages=0 00:04:06.983 surplus_hugepages=0 00:04:06.983 anon_hugepages=0 00:04:06.983 19:57:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.983 19:57:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.983 19:57:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.983 19:57:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.983 19:57:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.983 19:57:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.983 19:57:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.983 19:57:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.983 19:57:14 -- setup/common.sh@18 -- # local node= 00:04:06.983 19:57:14 -- setup/common.sh@19 -- # local var val 00:04:06.983 19:57:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.983 19:57:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.983 19:57:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.983 19:57:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.983 19:57:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.983 19:57:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934968 kB' 'MemAvailable: 9491764 kB' 'Buffers: 3704 kB' 'Cached: 1768836 kB' 'SwapCached: 0 kB' 'Active: 465640 kB' 'Inactive: 1422840 kB' 'Active(anon): 126432 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117520 kB' 'Mapped: 49940 kB' 'Shmem: 10492 kB' 'KReclaimable: 63352 kB' 'Slab: 160796 kB' 'SReclaimable: 63352 kB' 'SUnreclaim: 97444 kB' 'KernelStack: 6512 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 208748 kB' 'DirectMap2M: 6082560 kB' 'DirectMap1G: 8388608 kB' 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- 
setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.983 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.983 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 
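The long runs of "continue" above come from the get_meminfo helper in setup/common.sh: it loads the whole meminfo file, splits every line on ': ', skips each key until it reaches the requested one (here HugePages_Total), then echoes the value and returns. A minimal sketch of that pattern, reconstructed from the xtrace for readability rather than copied from the script, with the per-node variant included since the next call in the trace queries node 0:

    #!/usr/bin/env bash
    # Minimal sketch of the lookup pattern seen in the xtrace above; reconstructed
    # for readability, not the verbatim test/setup/common.sh source.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix (extglob)
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # source of the repeated "continue" lines
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total      # prints 1024 on this VM
    get_meminfo HugePages_Surp 0     # per-node form, as called a few lines below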
00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.984 19:57:14 -- setup/common.sh@33 -- # echo 1024 00:04:06.984 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:06.984 19:57:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.984 19:57:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.984 19:57:14 -- setup/hugepages.sh@27 -- # local node 00:04:06.984 19:57:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.984 19:57:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.984 19:57:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.984 19:57:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.984 19:57:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.984 19:57:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.984 19:57:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.984 19:57:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.984 19:57:14 -- setup/common.sh@18 -- # local node=0 
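The stretch of trace just above (setup/hugepages.sh@107 through @117) is the bookkeeping half of the test: after reading HugePages_Total it checks that the global count equals nr_hugepages plus surplus plus reserved pages, enumerates the NUMA nodes (one on this VM), and then queries each node's own meminfo for HugePages_Surp. Roughly, with paraphrased variable names and reusing the get_meminfo sketch shown earlier (an illustration, not the script itself):

    # Rough sketch of the accounting in setup/hugepages.sh visible above;
    # variable names are paraphrased and get_meminfo is the sketch shown earlier.
    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'global hugepage count mismatch'
    # Per-node expectation: a single-node VM should hold all pages on node0.
    shopt -s nullglob
    declare -A nodes_test
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nodes_test[$n]=$(( nr_hugepages + $(get_meminfo HugePages_Surp "$n") ))
    done
    echo "node0=${nodes_test[0]} expecting $nr_hugepages"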
00:04:06.984 19:57:14 -- setup/common.sh@19 -- # local var val 00:04:06.984 19:57:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.984 19:57:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.984 19:57:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.984 19:57:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.984 19:57:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.984 19:57:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7934968 kB' 'MemUsed: 4302128 kB' 'SwapCached: 0 kB' 'Active: 465776 kB' 'Inactive: 1422840 kB' 'Active(anon): 126568 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1772540 kB' 'Mapped: 49940 kB' 'AnonPages: 117648 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63352 kB' 'Slab: 160792 kB' 'SReclaimable: 63352 kB' 'SUnreclaim: 97440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.984 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.984 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- 
# continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # continue 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.985 19:57:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.985 19:57:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.985 19:57:14 -- setup/common.sh@33 -- # echo 0 00:04:06.985 19:57:14 -- setup/common.sh@33 -- # return 0 00:04:07.244 node0=1024 expecting 1024 00:04:07.244 19:57:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.244 19:57:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.244 19:57:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.244 19:57:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.244 19:57:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.244 ************************************ 00:04:07.244 END TEST no_shrink_alloc 00:04:07.244 ************************************ 00:04:07.244 19:57:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.244 00:04:07.244 real 0m1.118s 00:04:07.244 user 0m0.490s 00:04:07.244 sys 0m0.647s 00:04:07.244 19:57:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.244 19:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:07.244 19:57:14 -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.244 19:57:14 -- setup/hugepages.sh@37 -- # local node hp 00:04:07.244 19:57:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.244 19:57:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.244 19:57:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.244 19:57:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.244 19:57:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.244 19:57:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.244 19:57:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.244 ************************************ 00:04:07.244 END TEST hugepages 00:04:07.244 ************************************ 00:04:07.244 00:04:07.244 real 0m5.134s 00:04:07.244 user 0m2.102s 00:04:07.244 sys 0m2.911s 00:04:07.244 19:57:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.244 19:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:07.244 19:57:14 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.244 19:57:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.244 19:57:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.244 19:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:07.244 
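The hugepages suite ends with clear_hp, which the trace shows iterating over every NUMA node's hugepage pools, writing 0 into each, and exporting CLEAR_HUGE=yes for the rest of the run. A compact sketch, under the assumption that the zeroes go into each pool's nr_hugepages file (set -x does not print redirection targets, so that part is inferred):

    # Sketch of the clear_hp cleanup traced above; the nr_hugepages target is an
    # assumption, since xtrace does not show redirections.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # release the 2048kB (and any 1G) pool
            done
        done
        export CLEAR_HUGE=yes                  # exported for the rest of the test run
    }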
************************************ 00:04:07.244 START TEST driver 00:04:07.244 ************************************ 00:04:07.244 19:57:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.244 * Looking for test storage... 00:04:07.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.244 19:57:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:07.244 19:57:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:07.244 19:57:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.244 19:57:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.244 19:57:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.244 19:57:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.244 19:57:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.244 19:57:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.244 19:57:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.244 19:57:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.244 19:57:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.244 19:57:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.244 19:57:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.244 19:57:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.244 19:57:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.244 19:57:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.244 19:57:14 -- scripts/common.sh@344 -- # : 1 00:04:07.244 19:57:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.244 19:57:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.244 19:57:14 -- scripts/common.sh@364 -- # decimal 1 00:04:07.244 19:57:14 -- scripts/common.sh@352 -- # local d=1 00:04:07.244 19:57:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.244 19:57:14 -- scripts/common.sh@354 -- # echo 1 00:04:07.244 19:57:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.244 19:57:14 -- scripts/common.sh@365 -- # decimal 2 00:04:07.244 19:57:14 -- scripts/common.sh@352 -- # local d=2 00:04:07.244 19:57:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.244 19:57:14 -- scripts/common.sh@354 -- # echo 2 00:04:07.244 19:57:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.244 19:57:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.244 19:57:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.244 19:57:14 -- scripts/common.sh@367 -- # return 0 00:04:07.244 19:57:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.244 19:57:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.244 --rc genhtml_branch_coverage=1 00:04:07.244 --rc genhtml_function_coverage=1 00:04:07.244 --rc genhtml_legend=1 00:04:07.245 --rc geninfo_all_blocks=1 00:04:07.245 --rc geninfo_unexecuted_blocks=1 00:04:07.245 00:04:07.245 ' 00:04:07.245 19:57:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.245 --rc genhtml_branch_coverage=1 00:04:07.245 --rc genhtml_function_coverage=1 00:04:07.245 --rc genhtml_legend=1 00:04:07.245 --rc geninfo_all_blocks=1 00:04:07.245 --rc geninfo_unexecuted_blocks=1 00:04:07.245 00:04:07.245 ' 00:04:07.245 19:57:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.245 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:07.245 --rc genhtml_branch_coverage=1 00:04:07.245 --rc genhtml_function_coverage=1 00:04:07.245 --rc genhtml_legend=1 00:04:07.245 --rc geninfo_all_blocks=1 00:04:07.245 --rc geninfo_unexecuted_blocks=1 00:04:07.245 00:04:07.245 ' 00:04:07.245 19:57:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.245 --rc genhtml_branch_coverage=1 00:04:07.245 --rc genhtml_function_coverage=1 00:04:07.245 --rc genhtml_legend=1 00:04:07.245 --rc geninfo_all_blocks=1 00:04:07.245 --rc geninfo_unexecuted_blocks=1 00:04:07.245 00:04:07.245 ' 00:04:07.245 19:57:14 -- setup/driver.sh@68 -- # setup reset 00:04:07.245 19:57:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.245 19:57:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.805 19:57:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.805 19:57:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.805 19:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.805 19:57:20 -- common/autotest_common.sh@10 -- # set +x 00:04:13.805 ************************************ 00:04:13.805 START TEST guess_driver 00:04:13.805 ************************************ 00:04:13.805 19:57:20 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:13.805 19:57:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.805 19:57:20 -- setup/driver.sh@47 -- # local fail=0 00:04:13.805 19:57:20 -- setup/driver.sh@49 -- # pick_driver 00:04:13.805 19:57:20 -- setup/driver.sh@36 -- # vfio 00:04:13.805 19:57:20 -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.805 19:57:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.805 19:57:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.805 19:57:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.805 19:57:20 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:13.805 19:57:20 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:13.805 19:57:20 -- setup/driver.sh@32 -- # return 1 00:04:13.805 19:57:20 -- setup/driver.sh@38 -- # uio 00:04:13.805 19:57:20 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:13.805 19:57:20 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:13.805 19:57:20 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:13.805 19:57:20 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:13.805 19:57:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:13.805 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:13.805 19:57:20 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:13.805 Looking for driver=uio_pci_generic 00:04:13.805 19:57:20 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:13.805 19:57:20 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.805 19:57:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:13.805 19:57:20 -- setup/driver.sh@45 -- # setup output config 00:04:13.805 19:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.805 19:57:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.805 19:57:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.805 19:57:21 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 
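The driver suite's guess_driver, visible above, defers to pick_driver: vfio is only chosen when /sys/kernel/iommu_groups is populated (or the unsafe no-IOMMU module parameter reads Y); otherwise it falls back to uio_pci_generic, validated by asking modprobe --show-depends whether the module resolves to a .ko file. On this VM there are no IOMMU groups, so the run settles on uio_pci_generic. A condensed, paraphrased sketch of that decision (not the verbatim setup/driver.sh; function bodies are reconstructed from the trace):

    # Condensed sketch of the driver choice traced above; function and variable
    # names follow the trace loosely, the bodies are paraphrased.
    pick_driver() {
        shopt -s nullglob                     # empty iommu_groups dir -> empty array
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic              # the branch this run takes
        else
            echo 'No valid driver found'
        fi
    }
    driver=$(pick_driver)
    echo "Looking for driver=$driver"         # matches the message in the log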
00:04:13.805 19:57:21 -- setup/driver.sh@58 -- # continue 00:04:13.805 19:57:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.805 19:57:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.805 19:57:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.805 19:57:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.805 19:57:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.805 19:57:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.805 19:57:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.805 19:57:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.805 19:57:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.805 19:57:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.064 19:57:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.064 19:57:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:14.064 19:57:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.064 19:57:21 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.064 19:57:21 -- setup/driver.sh@65 -- # setup reset 00:04:14.064 19:57:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.064 19:57:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.649 00:04:20.649 real 0m6.758s 00:04:20.649 user 0m0.652s 00:04:20.649 sys 0m1.150s 00:04:20.649 19:57:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.649 ************************************ 00:04:20.649 END TEST guess_driver 00:04:20.649 ************************************ 00:04:20.649 19:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:20.649 ************************************ 00:04:20.649 END TEST driver 00:04:20.649 ************************************ 00:04:20.649 00:04:20.649 real 0m12.619s 00:04:20.649 user 0m0.979s 00:04:20.649 sys 0m1.779s 00:04:20.649 19:57:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.649 19:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:20.649 19:57:27 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:20.649 19:57:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.649 19:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.649 19:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:20.649 ************************************ 00:04:20.649 START TEST devices 00:04:20.649 ************************************ 00:04:20.649 19:57:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:20.649 * Looking for test storage... 
00:04:20.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.649 19:57:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:20.649 19:57:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:20.649 19:57:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:20.649 19:57:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:20.649 19:57:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:20.649 19:57:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:20.649 19:57:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:20.649 19:57:27 -- scripts/common.sh@335 -- # IFS=.-: 00:04:20.649 19:57:27 -- scripts/common.sh@335 -- # read -ra ver1 00:04:20.649 19:57:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.649 19:57:27 -- scripts/common.sh@336 -- # read -ra ver2 00:04:20.649 19:57:27 -- scripts/common.sh@337 -- # local 'op=<' 00:04:20.649 19:57:27 -- scripts/common.sh@339 -- # ver1_l=2 00:04:20.649 19:57:27 -- scripts/common.sh@340 -- # ver2_l=1 00:04:20.649 19:57:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:20.649 19:57:27 -- scripts/common.sh@343 -- # case "$op" in 00:04:20.649 19:57:27 -- scripts/common.sh@344 -- # : 1 00:04:20.649 19:57:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:20.649 19:57:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.649 19:57:27 -- scripts/common.sh@364 -- # decimal 1 00:04:20.649 19:57:27 -- scripts/common.sh@352 -- # local d=1 00:04:20.649 19:57:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.649 19:57:27 -- scripts/common.sh@354 -- # echo 1 00:04:20.649 19:57:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:20.649 19:57:27 -- scripts/common.sh@365 -- # decimal 2 00:04:20.649 19:57:27 -- scripts/common.sh@352 -- # local d=2 00:04:20.649 19:57:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.649 19:57:27 -- scripts/common.sh@354 -- # echo 2 00:04:20.649 19:57:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:20.649 19:57:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:20.649 19:57:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:20.649 19:57:27 -- scripts/common.sh@367 -- # return 0 00:04:20.649 19:57:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.649 19:57:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:20.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.649 --rc genhtml_branch_coverage=1 00:04:20.649 --rc genhtml_function_coverage=1 00:04:20.649 --rc genhtml_legend=1 00:04:20.649 --rc geninfo_all_blocks=1 00:04:20.649 --rc geninfo_unexecuted_blocks=1 00:04:20.649 00:04:20.649 ' 00:04:20.649 19:57:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:20.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.649 --rc genhtml_branch_coverage=1 00:04:20.649 --rc genhtml_function_coverage=1 00:04:20.649 --rc genhtml_legend=1 00:04:20.649 --rc geninfo_all_blocks=1 00:04:20.649 --rc geninfo_unexecuted_blocks=1 00:04:20.649 00:04:20.649 ' 00:04:20.649 19:57:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:20.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.649 --rc genhtml_branch_coverage=1 00:04:20.649 --rc genhtml_function_coverage=1 00:04:20.649 --rc genhtml_legend=1 00:04:20.649 --rc geninfo_all_blocks=1 00:04:20.649 --rc geninfo_unexecuted_blocks=1 00:04:20.649 00:04:20.649 ' 00:04:20.649 19:57:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:20.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.649 --rc genhtml_branch_coverage=1 00:04:20.649 --rc genhtml_function_coverage=1 00:04:20.649 --rc genhtml_legend=1 00:04:20.649 --rc geninfo_all_blocks=1 00:04:20.649 --rc geninfo_unexecuted_blocks=1 00:04:20.649 00:04:20.649 ' 00:04:20.649 19:57:27 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.649 19:57:27 -- setup/devices.sh@192 -- # setup reset 00:04:20.649 19:57:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.649 19:57:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.220 19:57:28 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:21.220 19:57:28 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:21.220 19:57:28 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:21.220 19:57:28 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:21.220 19:57:28 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:21.220 19:57:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:21.220 19:57:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:21.220 19:57:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:21.220 19:57:28 -- setup/devices.sh@196 -- # blocks=() 00:04:21.220 19:57:28 -- setup/devices.sh@196 -- # declare -a blocks 00:04:21.221 19:57:28 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:21.221 19:57:28 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:21.221 19:57:28 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:21.221 19:57:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:21.221 19:57:28 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:21.221 19:57:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:21.221 19:57:28 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:21.221 19:57:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:21.221 No valid GPT data, bailing 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # pt= 00:04:21.221 19:57:28 -- scripts/common.sh@394 -- # return 1 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:21.221 19:57:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:21.221 19:57:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:21.221 19:57:28 -- setup/common.sh@80 -- # echo 1073741824 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:21.221 19:57:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:21.221 19:57:28 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:21.221 19:57:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:21.221 19:57:28 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:21.221 19:57:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:21.221 No valid GPT data, bailing 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # pt= 00:04:21.221 19:57:28 -- scripts/common.sh@394 -- # return 1 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:21.221 19:57:28 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:21.221 19:57:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:21.221 19:57:28 -- setup/common.sh@80 -- # echo 4294967296 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.221 19:57:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.221 19:57:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:21.221 19:57:28 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:21.221 19:57:28 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:21.221 19:57:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:21.221 19:57:28 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:21.221 19:57:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:21.221 No valid GPT data, bailing 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # pt= 00:04:21.221 19:57:28 -- scripts/common.sh@394 -- # return 1 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:21.221 19:57:28 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:21.221 19:57:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:21.221 19:57:28 -- setup/common.sh@80 -- # echo 4294967296 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.221 19:57:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.221 19:57:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:21.221 19:57:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:21.221 19:57:28 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:21.221 19:57:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:21.221 19:57:28 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:21.221 19:57:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:21.221 No valid GPT data, bailing 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # pt= 00:04:21.221 19:57:28 -- scripts/common.sh@394 -- # return 1 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:21.221 19:57:28 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:21.221 19:57:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:21.221 19:57:28 -- setup/common.sh@80 -- # echo 4294967296 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.221 19:57:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.221 19:57:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:21.221 19:57:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:21.221 19:57:28 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:21.221 19:57:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:21.221 19:57:28 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:21.221 19:57:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:21.221 No valid GPT data, bailing 00:04:21.221 19:57:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:21.221 
19:57:28 -- scripts/common.sh@393 -- # pt= 00:04:21.221 19:57:28 -- scripts/common.sh@394 -- # return 1 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:21.221 19:57:28 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:21.221 19:57:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:21.221 19:57:28 -- setup/common.sh@80 -- # echo 6343335936 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:21.221 19:57:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.221 19:57:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:21.221 19:57:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:21.221 19:57:28 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:21.221 19:57:28 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:21.221 19:57:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:21.221 19:57:28 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:21.221 19:57:28 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:21.221 19:57:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:21.482 No valid GPT data, bailing 00:04:21.482 19:57:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:21.482 19:57:28 -- scripts/common.sh@393 -- # pt= 00:04:21.482 19:57:28 -- scripts/common.sh@394 -- # return 1 00:04:21.482 19:57:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:21.482 19:57:28 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:21.482 19:57:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:21.482 19:57:28 -- setup/common.sh@80 -- # echo 5368709120 00:04:21.482 19:57:28 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:21.482 19:57:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.482 19:57:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:21.482 19:57:28 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:21.482 19:57:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:21.482 19:57:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:21.482 19:57:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.482 19:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.482 19:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.482 ************************************ 00:04:21.482 START TEST nvme_mount 00:04:21.482 ************************************ 00:04:21.482 19:57:28 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:21.482 19:57:28 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:21.482 19:57:28 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:21.482 19:57:28 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.482 19:57:28 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.482 19:57:28 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:21.482 19:57:28 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:21.482 19:57:28 -- setup/common.sh@40 -- # local part_no=1 00:04:21.482 19:57:28 -- setup/common.sh@41 -- # local size=1073741824 00:04:21.482 19:57:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.482 19:57:28 -- setup/common.sh@44 -- # parts=() 00:04:21.482 19:57:28 -- 
setup/common.sh@44 -- # local parts 00:04:21.482 19:57:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.482 19:57:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.482 19:57:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.482 19:57:28 -- setup/common.sh@46 -- # (( part++ )) 00:04:21.482 19:57:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.482 19:57:28 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:21.482 19:57:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:21.482 19:57:28 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:22.417 Creating new GPT entries in memory. 00:04:22.417 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.417 other utilities. 00:04:22.417 19:57:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.417 19:57:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.417 19:57:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.417 19:57:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.417 19:57:29 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:23.358 Creating new GPT entries in memory. 00:04:23.358 The operation has completed successfully. 00:04:23.358 19:57:30 -- setup/common.sh@57 -- # (( part++ )) 00:04:23.358 19:57:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.358 19:57:30 -- setup/common.sh@62 -- # wait 53713 00:04:23.619 19:57:31 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.619 19:57:31 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:23.619 19:57:31 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.619 19:57:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:23.619 19:57:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:23.619 19:57:31 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.619 19:57:31 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.619 19:57:31 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:23.619 19:57:31 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:23.619 19:57:31 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.619 19:57:31 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.619 19:57:31 -- setup/devices.sh@53 -- # local found=0 00:04:23.619 19:57:31 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.619 19:57:31 -- setup/devices.sh@56 -- # : 00:04:23.619 19:57:31 -- setup/devices.sh@59 -- # local pci status 00:04:23.619 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.619 19:57:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:23.619 19:57:31 -- setup/devices.sh@47 -- # setup output config 00:04:23.619 19:57:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.619 19:57:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.619 19:57:31 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.619 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.619 19:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.619 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.923 19:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.923 19:57:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:23.923 19:57:31 -- setup/devices.sh@63 -- # found=1 00:04:23.923 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.923 19:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.923 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.923 19:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.923 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.210 19:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:24.210 19:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.210 19:57:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.210 19:57:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:24.210 19:57:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.210 19:57:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.210 19:57:31 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.210 19:57:31 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.210 19:57:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.210 19:57:31 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.210 19:57:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:24.210 19:57:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:24.210 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.210 19:57:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:24.210 19:57:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:24.478 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.478 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.478 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.478 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:24.478 19:57:31 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:24.478 19:57:31 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:24.478 19:57:31 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.478 19:57:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:24.478 19:57:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:24.478 19:57:32 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.478 19:57:32 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.478 19:57:32 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:24.478 19:57:32 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:24.478 19:57:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.478 19:57:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.478 19:57:32 -- setup/devices.sh@53 -- # local found=0 00:04:24.478 19:57:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.478 19:57:32 -- setup/devices.sh@56 -- # : 00:04:24.478 19:57:32 -- setup/devices.sh@59 -- # local pci status 00:04:24.478 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.478 19:57:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:24.478 19:57:32 -- setup/devices.sh@47 -- # setup output config 00:04:24.478 19:57:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.478 19:57:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.478 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:24.478 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.739 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:24.739 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.000 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.000 19:57:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:25.000 19:57:32 -- setup/devices.sh@63 -- # found=1 00:04:25.000 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.000 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.000 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.000 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.000 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.000 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.000 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.262 19:57:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.262 19:57:32 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:25.262 19:57:32 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.262 19:57:32 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.262 19:57:32 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.262 19:57:32 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.262 19:57:32 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:25.262 19:57:32 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:25.262 19:57:32 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:25.262 19:57:32 -- setup/devices.sh@50 -- # local mount_point= 00:04:25.262 19:57:32 -- setup/devices.sh@51 -- # local test_file= 00:04:25.262 19:57:32 -- setup/devices.sh@53 -- # local found=0 
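The verify step above restricts setup.sh to a single controller via PCI_ALLOWED and then scans the config output for the device it expects to find mounted. A simplified sketch of that filtering loop follows; it is not the script's exact code, and $rootdir (the SPDK checkout) plus the expected "nvme1n1" mount string are assumptions taken from this trace:
  # each `setup.sh config` line looks like: "<bdf> (<vendor> <device>): <status text>"
  found=0
  allowed=0000:00:08.0
  while read -r pci _ _ status; do
      [[ $pci == "$allowed" ]] || continue
      # a mounted namespace reports "Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev"
      [[ $status == *"Active devices:"*"nvme1n1:nvme1n1"* ]] && found=1
  done < <(PCI_ALLOWED=$allowed "$rootdir/scripts/setup.sh" config)
  (( found == 1 )) && echo "expected mount is visible on $allowed"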
00:04:25.262 19:57:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.262 19:57:32 -- setup/devices.sh@59 -- # local pci status 00:04:25.262 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.262 19:57:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:25.262 19:57:32 -- setup/devices.sh@47 -- # setup output config 00:04:25.262 19:57:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.262 19:57:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.262 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.262 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.262 19:57:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.262 19:57:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.522 19:57:33 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.523 19:57:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:25.523 19:57:33 -- setup/devices.sh@63 -- # found=1 00:04:25.523 19:57:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.523 19:57:33 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.523 19:57:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.781 19:57:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.781 19:57:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.781 19:57:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.781 19:57:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.781 19:57:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.781 19:57:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.781 19:57:33 -- setup/devices.sh@68 -- # return 0 00:04:25.781 19:57:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:25.781 19:57:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.781 19:57:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:25.781 19:57:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:25.781 19:57:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:25.781 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.781 00:04:25.781 real 0m4.464s 00:04:25.781 user 0m0.887s 00:04:25.781 sys 0m1.190s 00:04:25.781 19:57:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.781 19:57:33 -- common/autotest_common.sh@10 -- # set +x 00:04:25.781 ************************************ 00:04:25.781 END TEST nvme_mount 00:04:25.781 ************************************ 00:04:25.781 19:57:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:25.781 19:57:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.781 19:57:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.781 19:57:33 -- common/autotest_common.sh@10 -- # set +x 00:04:26.038 ************************************ 00:04:26.038 START TEST dm_mount 00:04:26.038 ************************************ 00:04:26.038 19:57:33 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:26.038 19:57:33 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:26.038 19:57:33 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:26.038 19:57:33 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:26.038 19:57:33 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:26.038 19:57:33 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:26.038 19:57:33 -- setup/common.sh@40 -- # local part_no=2 00:04:26.038 19:57:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:26.038 19:57:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.038 19:57:33 -- setup/common.sh@44 -- # parts=() 00:04:26.038 19:57:33 -- setup/common.sh@44 -- # local parts 00:04:26.038 19:57:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.038 19:57:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.038 19:57:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.038 19:57:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:26.038 19:57:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.038 19:57:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.038 19:57:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:26.038 19:57:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.038 19:57:33 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:26.038 19:57:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:26.038 19:57:33 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:26.974 Creating new GPT entries in memory. 00:04:26.974 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.974 other utilities. 00:04:26.974 19:57:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.974 19:57:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.974 19:57:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.974 19:57:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.974 19:57:34 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:27.908 Creating new GPT entries in memory. 00:04:27.908 The operation has completed successfully. 00:04:27.908 19:57:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:27.908 19:57:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.908 19:57:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.908 19:57:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.908 19:57:35 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:29.284 The operation has completed successfully. 
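Both nvme_mount and dm_mount drive the same partition_drive helper: wipe the GPT, then carve fixed ~128 MiB partitions with sgdisk while holding a lock on the disk node. A condensed sketch of that sequence, with the sector ranges taken from the trace above; partprobe here stands in for the kernel-uevent wait the real script performs via sync_dev_uevents.sh:
  disk=/dev/nvme1n1
  sgdisk "$disk" --zap-all                            # destroy any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1: sectors 2048-264191 (~128 MiB)
  flock "$disk" sgdisk "$disk" --new=2:264192:526335  # partition 2 (only the dm_mount case adds this)
  partprobe "$disk"                                   # let the kernel re-read the partition table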
00:04:29.284 19:57:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:29.284 19:57:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.284 19:57:36 -- setup/common.sh@62 -- # wait 54336 00:04:29.284 19:57:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:29.284 19:57:36 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.284 19:57:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:29.284 19:57:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:29.284 19:57:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:29.284 19:57:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.284 19:57:36 -- setup/devices.sh@161 -- # break 00:04:29.284 19:57:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.284 19:57:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:29.284 19:57:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:29.284 19:57:36 -- setup/devices.sh@166 -- # dm=dm-0 00:04:29.284 19:57:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:29.284 19:57:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:29.284 19:57:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.284 19:57:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:29.284 19:57:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.284 19:57:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.284 19:57:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:29.284 19:57:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.284 19:57:36 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:29.284 19:57:36 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:29.284 19:57:36 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:29.284 19:57:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.284 19:57:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:29.284 19:57:36 -- setup/devices.sh@53 -- # local found=0 00:04:29.284 19:57:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.284 19:57:36 -- setup/devices.sh@56 -- # : 00:04:29.284 19:57:36 -- setup/devices.sh@59 -- # local pci status 00:04:29.284 19:57:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.284 19:57:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:29.284 19:57:36 -- setup/devices.sh@47 -- # setup output config 00:04:29.284 19:57:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.284 19:57:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.284 19:57:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.284 19:57:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.284 19:57:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.284 19:57:36 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.542 19:57:36 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.542 19:57:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:29.542 19:57:36 -- setup/devices.sh@63 -- # found=1 00:04:29.542 19:57:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.542 19:57:36 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.542 19:57:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.542 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.542 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.542 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.543 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.801 19:57:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.801 19:57:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:29.801 19:57:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.801 19:57:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.801 19:57:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:29.801 19:57:37 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.801 19:57:37 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:29.801 19:57:37 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:29.801 19:57:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:29.801 19:57:37 -- setup/devices.sh@50 -- # local mount_point= 00:04:29.801 19:57:37 -- setup/devices.sh@51 -- # local test_file= 00:04:29.801 19:57:37 -- setup/devices.sh@53 -- # local found=0 00:04:29.801 19:57:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.801 19:57:37 -- setup/devices.sh@59 -- # local pci status 00:04:29.801 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.801 19:57:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:29.801 19:57:37 -- setup/devices.sh@47 -- # setup output config 00:04:29.801 19:57:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.801 19:57:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.801 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.801 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.059 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.059 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.059 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.059 19:57:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:30.059 19:57:37 -- setup/devices.sh@63 -- # found=1 00:04:30.059 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.059 19:57:37 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.059 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.317 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.317 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.317 19:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.317 19:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.317 19:57:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.317 19:57:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.317 19:57:37 -- setup/devices.sh@68 -- # return 0 00:04:30.317 19:57:37 -- setup/devices.sh@187 -- # cleanup_dm 00:04:30.317 19:57:37 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.317 19:57:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:30.317 19:57:37 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:30.317 19:57:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:30.317 19:57:37 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:30.317 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.317 19:57:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:30.317 19:57:37 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:30.317 00:04:30.317 real 0m4.518s 00:04:30.317 user 0m0.596s 00:04:30.317 sys 0m0.831s 00:04:30.317 19:57:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.317 19:57:37 -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 ************************************ 00:04:30.317 END TEST dm_mount 00:04:30.317 ************************************ 00:04:30.575 19:57:37 -- setup/devices.sh@1 -- # cleanup 00:04:30.575 19:57:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:30.575 19:57:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.575 19:57:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:30.575 19:57:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:30.575 19:57:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:30.575 19:57:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:30.833 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.833 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.833 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.833 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:30.833 19:57:38 -- setup/devices.sh@12 -- # cleanup_dm 00:04:30.833 19:57:38 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.833 19:57:38 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:30.833 19:57:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:30.833 19:57:38 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:30.833 19:57:38 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:30.833 19:57:38 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:30.833 00:04:30.833 real 0m10.872s 00:04:30.833 user 0m2.248s 00:04:30.833 sys 0m2.735s 00:04:30.833 19:57:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.833 ************************************ 00:04:30.833 END TEST devices 00:04:30.833 ************************************ 00:04:30.833 19:57:38 -- common/autotest_common.sh@10 -- # 
set +x 00:04:30.833 00:04:30.833 real 0m39.043s 00:04:30.833 user 0m7.551s 00:04:30.833 sys 0m10.472s 00:04:30.833 ************************************ 00:04:30.833 19:57:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.834 19:57:38 -- common/autotest_common.sh@10 -- # set +x 00:04:30.834 END TEST setup.sh 00:04:30.834 ************************************ 00:04:30.834 19:57:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:30.834 Hugepages 00:04:30.834 node hugesize free / total 00:04:30.834 node0 1048576kB 0 / 0 00:04:31.093 node0 2048kB 2048 / 2048 00:04:31.093 00:04:31.093 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.093 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:31.093 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:31.093 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:31.093 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:31.351 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:31.351 19:57:38 -- spdk/autotest.sh@128 -- # uname -s 00:04:31.351 19:57:38 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:31.351 19:57:38 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:31.351 19:57:38 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.238 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.238 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.238 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.238 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.238 19:57:39 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:33.171 19:57:40 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:33.171 19:57:40 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:33.171 19:57:40 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.171 19:57:40 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:33.171 19:57:40 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:33.171 19:57:40 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:33.171 19:57:40 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.171 19:57:40 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.171 19:57:40 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:33.429 19:57:40 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:33.429 19:57:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:33.429 19:57:40 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.687 Waiting for block devices as requested 00:04:33.687 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.945 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.946 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.946 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.210 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:39.210 19:57:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:39.210 19:57:46 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:39.210 19:57:46 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:39.210 19:57:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:39.210 19:57:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1552 -- # continue 00:04:39.210 19:57:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:39.210 19:57:46 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:39.210 19:57:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:39.210 19:57:46 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:39.210 19:57:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl 
/dev/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:39.210 19:57:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:39.210 19:57:46 -- common/autotest_common.sh@1552 -- # continue 00:04:39.210 19:57:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:39.210 19:57:46 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.210 19:57:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:39.211 19:57:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:39.211 19:57:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:39.211 19:57:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:39.211 19:57:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:39.211 19:57:46 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:39.211 19:57:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:39.211 19:57:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1552 -- # continue 00:04:39.211 19:57:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:39.211 19:57:46 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:39.211 19:57:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.211 19:57:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:39.211 19:57:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:39.211 19:57:46 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:39.211 19:57:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:39.211 19:57:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:39.211 19:57:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:39.211 19:57:46 -- common/autotest_common.sh@1552 -- # continue 00:04:39.211 19:57:46 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:39.211 19:57:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.211 19:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:39.211 19:57:46 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:39.211 19:57:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.211 19:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:39.211 19:57:46 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.035 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.035 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.035 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.035 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.035 19:57:47 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:40.035 19:57:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.035 19:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.296 19:57:47 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:40.296 19:57:47 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:40.296 19:57:47 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.296 19:57:47 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:40.296 19:57:47 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:40.296 19:57:47 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:40.296 19:57:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:40.296 19:57:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:40.296 19:57:47 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.296 19:57:47 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.296 19:57:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:40.296 19:57:47 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:40.296 19:57:47 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:40.296 19:57:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:40.296 19:57:47 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.296 19:57:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:40.296 19:57:47 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
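opal_revert_cleanup walks every controller the same way the earlier block-device checks do: resolve the PCI address to its /dev/nvmeX node through sysfs, then read identify-controller fields with nvme-cli. A rough sketch of that resolution is below; the 0x0a54 device id it compares against appears to match Intel DC P4500/P4510-class drives, while the emulated controllers here report 0x0010, so the revert is skipped:
  bdf=0000:00:06.0
  for link in /sys/class/nvme/nvme*; do
      path=$(readlink -f "$link")
      # the controller living under this PCI device carries the BDF in its sysfs path
      [[ $path == *"/$bdf/nvme/"* ]] && ctrl=/dev/$(basename "$path")
  done
  dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")              # 0x0010 for these QEMU drives
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # optional admin command support bits
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity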
00:04:40.296 19:57:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:40.296 19:57:47 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.296 19:57:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:40.296 19:57:47 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:40.296 19:57:47 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.296 19:57:47 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:40.296 19:57:47 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:40.296 19:57:47 -- common/autotest_common.sh@1588 -- # return 0 00:04:40.296 19:57:47 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:40.296 19:57:47 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:40.296 19:57:47 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:40.296 19:57:47 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:40.296 19:57:47 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:40.296 19:57:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.296 19:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.296 19:57:47 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:40.296 19:57:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.296 19:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.296 19:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.296 ************************************ 00:04:40.296 START TEST env 00:04:40.296 ************************************ 00:04:40.296 19:57:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:40.296 * Looking for test storage... 00:04:40.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:40.296 19:57:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:40.296 19:57:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:40.296 19:57:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:40.296 19:57:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:40.296 19:57:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:40.296 19:57:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:40.296 19:57:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:40.296 19:57:47 -- scripts/common.sh@335 -- # IFS=.-: 00:04:40.296 19:57:47 -- scripts/common.sh@335 -- # read -ra ver1 00:04:40.296 19:57:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.296 19:57:47 -- scripts/common.sh@336 -- # read -ra ver2 00:04:40.296 19:57:47 -- scripts/common.sh@337 -- # local 'op=<' 00:04:40.296 19:57:47 -- scripts/common.sh@339 -- # ver1_l=2 00:04:40.296 19:57:47 -- scripts/common.sh@340 -- # ver2_l=1 00:04:40.296 19:57:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:40.296 19:57:47 -- scripts/common.sh@343 -- # case "$op" in 00:04:40.296 19:57:47 -- scripts/common.sh@344 -- # : 1 00:04:40.296 19:57:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:40.296 19:57:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.296 19:57:47 -- scripts/common.sh@364 -- # decimal 1 00:04:40.296 19:57:47 -- scripts/common.sh@352 -- # local d=1 00:04:40.296 19:57:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.296 19:57:47 -- scripts/common.sh@354 -- # echo 1 00:04:40.296 19:57:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:40.296 19:57:47 -- scripts/common.sh@365 -- # decimal 2 00:04:40.296 19:57:47 -- scripts/common.sh@352 -- # local d=2 00:04:40.296 19:57:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.296 19:57:47 -- scripts/common.sh@354 -- # echo 2 00:04:40.296 19:57:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:40.296 19:57:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:40.296 19:57:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:40.296 19:57:47 -- scripts/common.sh@367 -- # return 0 00:04:40.296 19:57:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.296 19:57:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:40.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.296 --rc genhtml_branch_coverage=1 00:04:40.296 --rc genhtml_function_coverage=1 00:04:40.296 --rc genhtml_legend=1 00:04:40.296 --rc geninfo_all_blocks=1 00:04:40.296 --rc geninfo_unexecuted_blocks=1 00:04:40.296 00:04:40.296 ' 00:04:40.296 19:57:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:40.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.296 --rc genhtml_branch_coverage=1 00:04:40.296 --rc genhtml_function_coverage=1 00:04:40.296 --rc genhtml_legend=1 00:04:40.296 --rc geninfo_all_blocks=1 00:04:40.296 --rc geninfo_unexecuted_blocks=1 00:04:40.296 00:04:40.296 ' 00:04:40.297 19:57:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.297 --rc genhtml_branch_coverage=1 00:04:40.297 --rc genhtml_function_coverage=1 00:04:40.297 --rc genhtml_legend=1 00:04:40.297 --rc geninfo_all_blocks=1 00:04:40.297 --rc geninfo_unexecuted_blocks=1 00:04:40.297 00:04:40.297 ' 00:04:40.297 19:57:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.297 --rc genhtml_branch_coverage=1 00:04:40.297 --rc genhtml_function_coverage=1 00:04:40.297 --rc genhtml_legend=1 00:04:40.297 --rc geninfo_all_blocks=1 00:04:40.297 --rc geninfo_unexecuted_blocks=1 00:04:40.297 00:04:40.297 ' 00:04:40.297 19:57:47 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:40.297 19:57:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.297 19:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.297 19:57:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.297 ************************************ 00:04:40.297 START TEST env_memory 00:04:40.297 ************************************ 00:04:40.297 19:57:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:40.297 00:04:40.297 00:04:40.297 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.297 http://cunit.sourceforge.net/ 00:04:40.297 00:04:40.297 00:04:40.297 Suite: memory 00:04:40.556 Test: alloc and free memory map ...[2024-12-16 19:57:47.963868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:40.556 passed 00:04:40.556 Test: mem 
map translation ...[2024-12-16 19:57:48.002496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:40.556 [2024-12-16 19:57:48.002533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:40.556 [2024-12-16 19:57:48.002591] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:40.556 [2024-12-16 19:57:48.002605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:40.556 passed 00:04:40.556 Test: mem map registration ...[2024-12-16 19:57:48.070626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:40.556 [2024-12-16 19:57:48.070661] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:40.556 passed 00:04:40.556 Test: mem map adjacent registrations ...passed 00:04:40.556 00:04:40.556 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.556 suites 1 1 n/a 0 0 00:04:40.556 tests 4 4 4 0 0 00:04:40.556 asserts 152 152 152 0 n/a 00:04:40.556 00:04:40.556 Elapsed time = 0.233 seconds 00:04:40.556 00:04:40.556 real 0m0.266s 00:04:40.556 user 0m0.245s 00:04:40.556 sys 0m0.014s 00:04:40.556 19:57:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.556 19:57:48 -- common/autotest_common.sh@10 -- # set +x 00:04:40.556 ************************************ 00:04:40.556 END TEST env_memory 00:04:40.556 ************************************ 00:04:40.814 19:57:48 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:40.814 19:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.814 19:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.814 19:57:48 -- common/autotest_common.sh@10 -- # set +x 00:04:40.814 ************************************ 00:04:40.814 START TEST env_vtophys 00:04:40.814 ************************************ 00:04:40.814 19:57:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:40.814 EAL: lib.eal log level changed from notice to debug 00:04:40.814 EAL: Detected lcore 0 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 1 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 2 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 3 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 4 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 5 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 6 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 7 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 8 as core 0 on socket 0 00:04:40.814 EAL: Detected lcore 9 as core 0 on socket 0 00:04:40.814 EAL: Maximum logical cores by configuration: 128 00:04:40.814 EAL: Detected CPU lcores: 10 00:04:40.814 EAL: Detected NUMA nodes: 1 00:04:40.814 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:40.814 EAL: Detected shared linkage of DPDK 00:04:40.814 EAL: No shared files mode enabled, IPC will be disabled 00:04:40.814 EAL: Selected IOVA mode 'PA' 00:04:40.814 EAL: Probing VFIO support... 
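Note: the START TEST / END TEST banners and the traced "'[' 2 -le 1 ']'" and xtrace_disable calls above come from the run_test helper in common/autotest_common.sh. A simplified, self-contained reconstruction of that wrapper -- a sketch inferred from the trace, not the real implementation -- looks roughly like this:

    run_test() {
        # mirrors the traced "'[' 2 -le 1 ']'" guard: need a test name plus a command
        if [ "$#" -le 1 ]; then
            echo "run_test: need a test name and a command" >&2
            return 1
        fi
        local name=$1
        shift
        local banner='************************************'

        echo "$banner"; echo "START TEST $name"; echo "$banner"

        "$@"                      # run the test binary, e.g. .../env/memory/memory_ut
        local rc=$?

        echo "$banner"; echo "END TEST $name"; echo "$banner"
        return $rc
    }

    # usage, matching the traced call for this suite:
    # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys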
00:04:40.814 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:40.814 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:40.814 EAL: Ask a virtual area of 0x2e000 bytes 00:04:40.814 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:40.814 EAL: Setting up physically contiguous memory... 00:04:40.814 EAL: Setting maximum number of open files to 524288 00:04:40.814 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:40.814 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:40.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.814 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:40.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.814 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:40.814 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:40.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.814 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:40.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.814 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:40.814 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:40.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.814 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:40.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.814 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:40.814 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:40.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.814 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:40.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.814 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:40.814 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:40.814 EAL: Hugepages will be freed exactly as allocated. 00:04:40.814 EAL: No shared files mode enabled, IPC is disabled 00:04:40.814 EAL: No shared files mode enabled, IPC is disabled 00:04:40.814 EAL: TSC frequency is ~2600000 KHz 00:04:40.814 EAL: Main lcore 0 is ready (tid=7f5d63af8a40;cpuset=[0]) 00:04:40.814 EAL: Trying to obtain current memory policy. 00:04:40.814 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.814 EAL: Restoring previous memory policy: 0 00:04:40.814 EAL: request: mp_malloc_sync 00:04:40.814 EAL: No shared files mode enabled, IPC is disabled 00:04:40.814 EAL: Heap on socket 0 was expanded by 2MB 00:04:40.814 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:40.814 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:40.814 EAL: Mem event callback 'spdk:(nil)' registered 00:04:40.814 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:40.814 00:04:40.814 00:04:40.814 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.814 http://cunit.sourceforge.net/ 00:04:40.814 00:04:40.814 00:04:40.814 Suite: components_suite 00:04:41.073 Test: vtophys_malloc_test ...passed 00:04:41.073 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
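Note: the memseg lists and the "Hugepages will be freed exactly as allocated" line above depend on 2 MB hugepages having been reserved before the test started. That reservation is not part of this excerpt; on an SPDK test box it is normally done once by scripts/setup.sh, roughly like this (an assumed invocation, not taken from this log):

    # assumption: reserve ~4 GiB of 2 MB hugepages (HUGEMEM is in MiB) before running the env suites
    sudo HUGEMEM=4096 ./scripts/setup.sh
    # sanity-check the reservation the EAL output above relies on
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo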
00:04:41.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.073 EAL: Restoring previous memory policy: 4 00:04:41.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.073 EAL: request: mp_malloc_sync 00:04:41.073 EAL: No shared files mode enabled, IPC is disabled 00:04:41.073 EAL: Heap on socket 0 was expanded by 4MB 00:04:41.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.073 EAL: request: mp_malloc_sync 00:04:41.073 EAL: No shared files mode enabled, IPC is disabled 00:04:41.073 EAL: Heap on socket 0 was shrunk by 4MB 00:04:41.073 EAL: Trying to obtain current memory policy. 00:04:41.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.073 EAL: Restoring previous memory policy: 4 00:04:41.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.073 EAL: request: mp_malloc_sync 00:04:41.073 EAL: No shared files mode enabled, IPC is disabled 00:04:41.073 EAL: Heap on socket 0 was expanded by 6MB 00:04:41.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.073 EAL: request: mp_malloc_sync 00:04:41.073 EAL: No shared files mode enabled, IPC is disabled 00:04:41.073 EAL: Heap on socket 0 was shrunk by 6MB 00:04:41.073 EAL: Trying to obtain current memory policy. 00:04:41.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.073 EAL: Restoring previous memory policy: 4 00:04:41.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.073 EAL: request: mp_malloc_sync 00:04:41.073 EAL: No shared files mode enabled, IPC is disabled 00:04:41.073 EAL: Heap on socket 0 was expanded by 10MB 00:04:41.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was shrunk by 10MB 00:04:41.332 EAL: Trying to obtain current memory policy. 00:04:41.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.332 EAL: Restoring previous memory policy: 4 00:04:41.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was expanded by 18MB 00:04:41.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was shrunk by 18MB 00:04:41.332 EAL: Trying to obtain current memory policy. 00:04:41.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.332 EAL: Restoring previous memory policy: 4 00:04:41.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was expanded by 34MB 00:04:41.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was shrunk by 34MB 00:04:41.332 EAL: Trying to obtain current memory policy. 
00:04:41.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.332 EAL: Restoring previous memory policy: 4 00:04:41.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was expanded by 66MB 00:04:41.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.332 EAL: request: mp_malloc_sync 00:04:41.332 EAL: No shared files mode enabled, IPC is disabled 00:04:41.332 EAL: Heap on socket 0 was shrunk by 66MB 00:04:41.602 EAL: Trying to obtain current memory policy. 00:04:41.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.602 EAL: Restoring previous memory policy: 4 00:04:41.602 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.602 EAL: request: mp_malloc_sync 00:04:41.602 EAL: No shared files mode enabled, IPC is disabled 00:04:41.602 EAL: Heap on socket 0 was expanded by 130MB 00:04:41.602 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.602 EAL: request: mp_malloc_sync 00:04:41.602 EAL: No shared files mode enabled, IPC is disabled 00:04:41.602 EAL: Heap on socket 0 was shrunk by 130MB 00:04:41.860 EAL: Trying to obtain current memory policy. 00:04:41.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.860 EAL: Restoring previous memory policy: 4 00:04:41.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.860 EAL: request: mp_malloc_sync 00:04:41.860 EAL: No shared files mode enabled, IPC is disabled 00:04:41.860 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.118 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.118 EAL: request: mp_malloc_sync 00:04:42.118 EAL: No shared files mode enabled, IPC is disabled 00:04:42.118 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.376 EAL: Trying to obtain current memory policy. 00:04:42.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.377 EAL: Restoring previous memory policy: 4 00:04:42.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.377 EAL: request: mp_malloc_sync 00:04:42.377 EAL: No shared files mode enabled, IPC is disabled 00:04:42.377 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.943 EAL: request: mp_malloc_sync 00:04:42.943 EAL: No shared files mode enabled, IPC is disabled 00:04:42.943 EAL: Heap on socket 0 was shrunk by 514MB 00:04:43.202 EAL: Trying to obtain current memory policy. 
00:04:43.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.460 EAL: Restoring previous memory policy: 4 00:04:43.460 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.460 EAL: request: mp_malloc_sync 00:04:43.460 EAL: No shared files mode enabled, IPC is disabled 00:04:43.460 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.394 EAL: request: mp_malloc_sync 00:04:44.394 EAL: No shared files mode enabled, IPC is disabled 00:04:44.394 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.330 passed 00:04:45.330 00:04:45.330 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.330 suites 1 1 n/a 0 0 00:04:45.330 tests 2 2 2 0 0 00:04:45.330 asserts 5495 5495 5495 0 n/a 00:04:45.330 00:04:45.330 Elapsed time = 4.278 seconds 00:04:45.330 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.330 EAL: request: mp_malloc_sync 00:04:45.330 EAL: No shared files mode enabled, IPC is disabled 00:04:45.330 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.330 EAL: No shared files mode enabled, IPC is disabled 00:04:45.330 EAL: No shared files mode enabled, IPC is disabled 00:04:45.330 EAL: No shared files mode enabled, IPC is disabled 00:04:45.330 00:04:45.330 real 0m4.526s 00:04:45.330 user 0m3.782s 00:04:45.330 sys 0m0.607s 00:04:45.330 19:57:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.330 ************************************ 00:04:45.330 19:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.330 END TEST env_vtophys 00:04:45.330 ************************************ 00:04:45.330 19:57:52 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.330 19:57:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.330 19:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.330 19:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.330 ************************************ 00:04:45.330 START TEST env_pci 00:04:45.330 ************************************ 00:04:45.330 19:57:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.330 00:04:45.330 00:04:45.330 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.330 http://cunit.sourceforge.net/ 00:04:45.330 00:04:45.330 00:04:45.330 Suite: pci 00:04:45.331 Test: pci_hook ...[2024-12-16 19:57:52.826396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56032 has claimed it 00:04:45.331 passed 00:04:45.331 00:04:45.331 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.331 suites 1 1 n/a 0 0 00:04:45.331 tests 1 1 1 0 0 00:04:45.331 asserts 25 25 25 0 n/a 00:04:45.331 00:04:45.331 Elapsed time = 0.005 seconds 00:04:45.331 EAL: Cannot find device (10000:00:01.0) 00:04:45.331 EAL: Failed to attach device on primary process 00:04:45.331 00:04:45.331 real 0m0.061s 00:04:45.331 user 0m0.032s 00:04:45.331 sys 0m0.028s 00:04:45.331 19:57:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.331 19:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.331 ************************************ 00:04:45.331 END TEST env_pci 00:04:45.331 ************************************ 00:04:45.331 19:57:52 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.331 19:57:52 -- env/env.sh@15 -- # uname 00:04:45.331 19:57:52 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.331 19:57:52 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:45.331 19:57:52 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.331 19:57:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:45.331 19:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.331 19:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.331 ************************************ 00:04:45.331 START TEST env_dpdk_post_init 00:04:45.331 ************************************ 00:04:45.331 19:57:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.331 EAL: Detected CPU lcores: 10 00:04:45.331 EAL: Detected NUMA nodes: 1 00:04:45.331 EAL: Detected shared linkage of DPDK 00:04:45.331 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.331 EAL: Selected IOVA mode 'PA' 00:04:45.590 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.590 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:45.590 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:45.590 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:45.590 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:45.590 Starting DPDK initialization... 00:04:45.590 Starting SPDK post initialization... 00:04:45.590 SPDK NVMe probe 00:04:45.590 Attaching to 0000:00:06.0 00:04:45.590 Attaching to 0000:00:07.0 00:04:45.590 Attaching to 0000:00:08.0 00:04:45.590 Attaching to 0000:00:09.0 00:04:45.590 Attached to 0000:00:06.0 00:04:45.590 Attached to 0000:00:07.0 00:04:45.590 Attached to 0000:00:09.0 00:04:45.590 Attached to 0000:00:08.0 00:04:45.590 Cleaning up... 00:04:45.590 00:04:45.590 real 0m0.226s 00:04:45.590 user 0m0.063s 00:04:45.590 sys 0m0.064s 00:04:45.590 19:57:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.590 19:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.590 ************************************ 00:04:45.590 END TEST env_dpdk_post_init 00:04:45.590 ************************************ 00:04:45.590 19:57:53 -- env/env.sh@26 -- # uname 00:04:45.590 19:57:53 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:45.590 19:57:53 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.590 19:57:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.590 19:57:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.590 19:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.590 ************************************ 00:04:45.590 START TEST env_mem_callbacks 00:04:45.590 ************************************ 00:04:45.590 19:57:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.590 EAL: Detected CPU lcores: 10 00:04:45.590 EAL: Detected NUMA nodes: 1 00:04:45.590 EAL: Detected shared linkage of DPDK 00:04:45.590 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.590 EAL: Selected IOVA mode 'PA' 00:04:45.848 00:04:45.848 00:04:45.848 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.848 http://cunit.sourceforge.net/ 00:04:45.848 00:04:45.848 00:04:45.848 Suite: memory 00:04:45.848 Test: test ... 
00:04:45.848 register 0x200000200000 2097152 00:04:45.848 malloc 3145728 00:04:45.848 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.848 register 0x200000400000 4194304 00:04:45.848 buf 0x2000004fffc0 len 3145728 PASSED 00:04:45.848 malloc 64 00:04:45.848 buf 0x2000004ffec0 len 64 PASSED 00:04:45.848 malloc 4194304 00:04:45.848 register 0x200000800000 6291456 00:04:45.848 buf 0x2000009fffc0 len 4194304 PASSED 00:04:45.848 free 0x2000004fffc0 3145728 00:04:45.848 free 0x2000004ffec0 64 00:04:45.848 unregister 0x200000400000 4194304 PASSED 00:04:45.848 free 0x2000009fffc0 4194304 00:04:45.848 unregister 0x200000800000 6291456 PASSED 00:04:45.848 malloc 8388608 00:04:45.848 register 0x200000400000 10485760 00:04:45.848 buf 0x2000005fffc0 len 8388608 PASSED 00:04:45.848 free 0x2000005fffc0 8388608 00:04:45.848 unregister 0x200000400000 10485760 PASSED 00:04:45.848 passed 00:04:45.848 00:04:45.848 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.848 suites 1 1 n/a 0 0 00:04:45.848 tests 1 1 1 0 0 00:04:45.848 asserts 15 15 15 0 n/a 00:04:45.848 00:04:45.848 Elapsed time = 0.043 seconds 00:04:45.848 00:04:45.848 real 0m0.205s 00:04:45.848 user 0m0.062s 00:04:45.848 sys 0m0.042s 00:04:45.848 19:57:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.848 19:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.848 ************************************ 00:04:45.848 END TEST env_mem_callbacks 00:04:45.848 ************************************ 00:04:45.848 00:04:45.848 real 0m5.632s 00:04:45.848 user 0m4.324s 00:04:45.848 sys 0m0.952s 00:04:45.848 19:57:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.848 19:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.848 ************************************ 00:04:45.848 END TEST env 00:04:45.848 ************************************ 00:04:45.848 19:57:53 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.848 19:57:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.848 19:57:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.848 19:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.848 ************************************ 00:04:45.848 START TEST rpc 00:04:45.848 ************************************ 00:04:45.848 19:57:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:46.108 * Looking for test storage... 
00:04:46.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.108 19:57:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.108 19:57:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.108 19:57:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.108 19:57:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.108 19:57:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.108 19:57:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.108 19:57:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.108 19:57:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.108 19:57:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.108 19:57:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.108 19:57:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.108 19:57:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.108 19:57:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.108 19:57:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.108 19:57:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.108 19:57:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.108 19:57:53 -- scripts/common.sh@344 -- # : 1 00:04:46.108 19:57:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.108 19:57:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.108 19:57:53 -- scripts/common.sh@364 -- # decimal 1 00:04:46.108 19:57:53 -- scripts/common.sh@352 -- # local d=1 00:04:46.108 19:57:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.108 19:57:53 -- scripts/common.sh@354 -- # echo 1 00:04:46.108 19:57:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.108 19:57:53 -- scripts/common.sh@365 -- # decimal 2 00:04:46.108 19:57:53 -- scripts/common.sh@352 -- # local d=2 00:04:46.108 19:57:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.108 19:57:53 -- scripts/common.sh@354 -- # echo 2 00:04:46.108 19:57:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.108 19:57:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.108 19:57:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.108 19:57:53 -- scripts/common.sh@367 -- # return 0 00:04:46.108 19:57:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.108 19:57:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.108 --rc genhtml_branch_coverage=1 00:04:46.108 --rc genhtml_function_coverage=1 00:04:46.108 --rc genhtml_legend=1 00:04:46.108 --rc geninfo_all_blocks=1 00:04:46.108 --rc geninfo_unexecuted_blocks=1 00:04:46.108 00:04:46.108 ' 00:04:46.108 19:57:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.108 --rc genhtml_branch_coverage=1 00:04:46.108 --rc genhtml_function_coverage=1 00:04:46.108 --rc genhtml_legend=1 00:04:46.108 --rc geninfo_all_blocks=1 00:04:46.108 --rc geninfo_unexecuted_blocks=1 00:04:46.108 00:04:46.108 ' 00:04:46.108 19:57:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.108 --rc genhtml_branch_coverage=1 00:04:46.108 --rc genhtml_function_coverage=1 00:04:46.108 --rc genhtml_legend=1 00:04:46.108 --rc geninfo_all_blocks=1 00:04:46.108 --rc geninfo_unexecuted_blocks=1 00:04:46.108 00:04:46.108 ' 00:04:46.108 19:57:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.108 --rc genhtml_branch_coverage=1 00:04:46.108 --rc genhtml_function_coverage=1 00:04:46.108 --rc genhtml_legend=1 00:04:46.108 --rc geninfo_all_blocks=1 00:04:46.108 --rc geninfo_unexecuted_blocks=1 00:04:46.108 00:04:46.108 ' 00:04:46.108 19:57:53 -- rpc/rpc.sh@65 -- # spdk_pid=56154 00:04:46.108 19:57:53 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.108 19:57:53 -- rpc/rpc.sh@67 -- # waitforlisten 56154 00:04:46.108 19:57:53 -- common/autotest_common.sh@829 -- # '[' -z 56154 ']' 00:04:46.108 19:57:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.108 19:57:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.108 19:57:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.108 19:57:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.108 19:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.108 19:57:53 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:46.108 [2024-12-16 19:57:53.658091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.108 [2024-12-16 19:57:53.658192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56154 ] 00:04:46.367 [2024-12-16 19:57:53.816531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.367 [2024-12-16 19:57:53.991826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.367 [2024-12-16 19:57:53.992038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:46.367 [2024-12-16 19:57:53.992054] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56154' to capture a snapshot of events at runtime. 00:04:46.367 [2024-12-16 19:57:53.992065] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56154 for offline analysis/debug. 
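Note: stripped of the xtrace prefixes, the rpc.sh launch sequence traced above is: start the target, remember its pid, install a cleanup trap, and block until the RPC socket answers. A condensed sketch of that flow, reconstructed from the trace, is:

    # waitforlisten and killprocess are helpers from common/autotest_common.sh (seen in the trace)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # -e bdev: enable the bdev tracepoint group
    spdk_pid=$!                                                 # 56154 in this run
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $spdk_pid          # poll until /var/tmp/spdk.sock accepts RPCs

    # ... rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity run here ...

    trap - SIGINT SIGTERM EXIT
    killprocess $spdk_pid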
00:04:46.367 [2024-12-16 19:57:53.992095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.742 19:57:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.742 19:57:55 -- common/autotest_common.sh@862 -- # return 0 00:04:47.742 19:57:55 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.742 19:57:55 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.742 19:57:55 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:47.742 19:57:55 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:47.742 19:57:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.742 19:57:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.742 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.742 ************************************ 00:04:47.742 START TEST rpc_integrity 00:04:47.742 ************************************ 00:04:47.742 19:57:55 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:47.742 19:57:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.742 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.742 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.742 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.742 19:57:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.742 19:57:55 -- rpc/rpc.sh@13 -- # jq length 00:04:47.742 19:57:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.742 19:57:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.742 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.742 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.742 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.742 19:57:55 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:47.742 19:57:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.742 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.742 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.742 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.742 19:57:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.742 { 00:04:47.742 "name": "Malloc0", 00:04:47.742 "aliases": [ 00:04:47.742 "8bb58efa-70a8-45a0-a47a-e9c02efbe23b" 00:04:47.742 ], 00:04:47.742 "product_name": "Malloc disk", 00:04:47.742 "block_size": 512, 00:04:47.742 "num_blocks": 16384, 00:04:47.742 "uuid": "8bb58efa-70a8-45a0-a47a-e9c02efbe23b", 00:04:47.742 "assigned_rate_limits": { 00:04:47.742 "rw_ios_per_sec": 0, 00:04:47.742 "rw_mbytes_per_sec": 0, 00:04:47.742 "r_mbytes_per_sec": 0, 00:04:47.742 "w_mbytes_per_sec": 0 00:04:47.742 }, 00:04:47.742 "claimed": false, 00:04:47.742 "zoned": false, 00:04:47.742 "supported_io_types": { 00:04:47.742 "read": true, 00:04:47.742 "write": true, 00:04:47.742 "unmap": true, 00:04:47.742 "write_zeroes": true, 00:04:47.742 "flush": true, 00:04:47.742 "reset": true, 00:04:47.742 "compare": false, 00:04:47.742 "compare_and_write": false, 00:04:47.742 "abort": true, 00:04:47.742 "nvme_admin": false, 00:04:47.742 "nvme_io": false 00:04:47.742 }, 00:04:47.742 "memory_domains": [ 00:04:47.742 { 00:04:47.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.742 
"dma_device_type": 2 00:04:47.742 } 00:04:47.742 ], 00:04:47.742 "driver_specific": {} 00:04:47.742 } 00:04:47.742 ]' 00:04:47.742 19:57:55 -- rpc/rpc.sh@17 -- # jq length 00:04:47.742 19:57:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.742 19:57:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:47.742 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.742 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.742 [2024-12-16 19:57:55.279943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:47.742 [2024-12-16 19:57:55.280010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.742 [2024-12-16 19:57:55.280031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:47.742 [2024-12-16 19:57:55.280048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.742 [2024-12-16 19:57:55.282234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.742 [2024-12-16 19:57:55.282274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.742 Passthru0 00:04:47.742 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.742 19:57:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.742 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.742 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.742 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.742 19:57:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.742 { 00:04:47.742 "name": "Malloc0", 00:04:47.742 "aliases": [ 00:04:47.742 "8bb58efa-70a8-45a0-a47a-e9c02efbe23b" 00:04:47.742 ], 00:04:47.742 "product_name": "Malloc disk", 00:04:47.742 "block_size": 512, 00:04:47.742 "num_blocks": 16384, 00:04:47.742 "uuid": "8bb58efa-70a8-45a0-a47a-e9c02efbe23b", 00:04:47.742 "assigned_rate_limits": { 00:04:47.742 "rw_ios_per_sec": 0, 00:04:47.742 "rw_mbytes_per_sec": 0, 00:04:47.742 "r_mbytes_per_sec": 0, 00:04:47.742 "w_mbytes_per_sec": 0 00:04:47.742 }, 00:04:47.742 "claimed": true, 00:04:47.742 "claim_type": "exclusive_write", 00:04:47.742 "zoned": false, 00:04:47.742 "supported_io_types": { 00:04:47.742 "read": true, 00:04:47.742 "write": true, 00:04:47.742 "unmap": true, 00:04:47.742 "write_zeroes": true, 00:04:47.742 "flush": true, 00:04:47.742 "reset": true, 00:04:47.742 "compare": false, 00:04:47.742 "compare_and_write": false, 00:04:47.742 "abort": true, 00:04:47.742 "nvme_admin": false, 00:04:47.742 "nvme_io": false 00:04:47.742 }, 00:04:47.742 "memory_domains": [ 00:04:47.742 { 00:04:47.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.742 "dma_device_type": 2 00:04:47.742 } 00:04:47.742 ], 00:04:47.742 "driver_specific": {} 00:04:47.742 }, 00:04:47.742 { 00:04:47.742 "name": "Passthru0", 00:04:47.742 "aliases": [ 00:04:47.742 "ecbee1f4-d677-58f3-b52a-1b04961b8381" 00:04:47.742 ], 00:04:47.742 "product_name": "passthru", 00:04:47.742 "block_size": 512, 00:04:47.742 "num_blocks": 16384, 00:04:47.742 "uuid": "ecbee1f4-d677-58f3-b52a-1b04961b8381", 00:04:47.742 "assigned_rate_limits": { 00:04:47.742 "rw_ios_per_sec": 0, 00:04:47.742 "rw_mbytes_per_sec": 0, 00:04:47.742 "r_mbytes_per_sec": 0, 00:04:47.742 "w_mbytes_per_sec": 0 00:04:47.742 }, 00:04:47.742 "claimed": false, 00:04:47.742 "zoned": false, 00:04:47.743 "supported_io_types": { 00:04:47.743 "read": true, 00:04:47.743 "write": true, 00:04:47.743 "unmap": true, 00:04:47.743 
"write_zeroes": true, 00:04:47.743 "flush": true, 00:04:47.743 "reset": true, 00:04:47.743 "compare": false, 00:04:47.743 "compare_and_write": false, 00:04:47.743 "abort": true, 00:04:47.743 "nvme_admin": false, 00:04:47.743 "nvme_io": false 00:04:47.743 }, 00:04:47.743 "memory_domains": [ 00:04:47.743 { 00:04:47.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.743 "dma_device_type": 2 00:04:47.743 } 00:04:47.743 ], 00:04:47.743 "driver_specific": { 00:04:47.743 "passthru": { 00:04:47.743 "name": "Passthru0", 00:04:47.743 "base_bdev_name": "Malloc0" 00:04:47.743 } 00:04:47.743 } 00:04:47.743 } 00:04:47.743 ]' 00:04:47.743 19:57:55 -- rpc/rpc.sh@21 -- # jq length 00:04:47.743 19:57:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.743 19:57:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.743 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.743 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.743 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.743 19:57:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:47.743 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.743 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.743 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.743 19:57:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.743 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.743 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.743 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.743 19:57:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.743 19:57:55 -- rpc/rpc.sh@26 -- # jq length 00:04:48.001 19:57:55 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.001 00:04:48.001 real 0m0.246s 00:04:48.001 user 0m0.113s 00:04:48.001 sys 0m0.035s 00:04:48.001 19:57:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.001 ************************************ 00:04:48.001 END TEST rpc_integrity 00:04:48.001 ************************************ 00:04:48.001 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.001 19:57:55 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:48.001 19:57:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.001 19:57:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.001 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.001 ************************************ 00:04:48.001 START TEST rpc_plugins 00:04:48.001 ************************************ 00:04:48.001 19:57:55 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:48.001 19:57:55 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:48.001 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.001 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.001 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.001 19:57:55 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:48.002 19:57:55 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:48.002 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.002 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.002 19:57:55 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:48.002 { 00:04:48.002 "name": "Malloc1", 00:04:48.002 "aliases": [ 00:04:48.002 "b19c5e59-afc8-42db-bf98-93383c1128c3" 00:04:48.002 ], 00:04:48.002 "product_name": "Malloc disk", 00:04:48.002 
"block_size": 4096, 00:04:48.002 "num_blocks": 256, 00:04:48.002 "uuid": "b19c5e59-afc8-42db-bf98-93383c1128c3", 00:04:48.002 "assigned_rate_limits": { 00:04:48.002 "rw_ios_per_sec": 0, 00:04:48.002 "rw_mbytes_per_sec": 0, 00:04:48.002 "r_mbytes_per_sec": 0, 00:04:48.002 "w_mbytes_per_sec": 0 00:04:48.002 }, 00:04:48.002 "claimed": false, 00:04:48.002 "zoned": false, 00:04:48.002 "supported_io_types": { 00:04:48.002 "read": true, 00:04:48.002 "write": true, 00:04:48.002 "unmap": true, 00:04:48.002 "write_zeroes": true, 00:04:48.002 "flush": true, 00:04:48.002 "reset": true, 00:04:48.002 "compare": false, 00:04:48.002 "compare_and_write": false, 00:04:48.002 "abort": true, 00:04:48.002 "nvme_admin": false, 00:04:48.002 "nvme_io": false 00:04:48.002 }, 00:04:48.002 "memory_domains": [ 00:04:48.002 { 00:04:48.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.002 "dma_device_type": 2 00:04:48.002 } 00:04:48.002 ], 00:04:48.002 "driver_specific": {} 00:04:48.002 } 00:04:48.002 ]' 00:04:48.002 19:57:55 -- rpc/rpc.sh@32 -- # jq length 00:04:48.002 19:57:55 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:48.002 19:57:55 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:48.002 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.002 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.002 19:57:55 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:48.002 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.002 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.002 19:57:55 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:48.002 19:57:55 -- rpc/rpc.sh@36 -- # jq length 00:04:48.002 19:57:55 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:48.002 00:04:48.002 real 0m0.105s 00:04:48.002 user 0m0.054s 00:04:48.002 sys 0m0.016s 00:04:48.002 19:57:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.002 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 ************************************ 00:04:48.002 END TEST rpc_plugins 00:04:48.002 ************************************ 00:04:48.002 19:57:55 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:48.002 19:57:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.002 19:57:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.002 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 ************************************ 00:04:48.002 START TEST rpc_trace_cmd_test 00:04:48.002 ************************************ 00:04:48.002 19:57:55 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:48.002 19:57:55 -- rpc/rpc.sh@40 -- # local info 00:04:48.002 19:57:55 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:48.002 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.002 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.002 19:57:55 -- rpc/rpc.sh@42 -- # info='{ 00:04:48.002 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56154", 00:04:48.002 "tpoint_group_mask": "0x8", 00:04:48.002 "iscsi_conn": { 00:04:48.002 "mask": "0x2", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "scsi": { 00:04:48.002 "mask": "0x4", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "bdev": { 00:04:48.002 "mask": "0x8", 00:04:48.002 "tpoint_mask": 
"0xffffffffffffffff" 00:04:48.002 }, 00:04:48.002 "nvmf_rdma": { 00:04:48.002 "mask": "0x10", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "nvmf_tcp": { 00:04:48.002 "mask": "0x20", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "ftl": { 00:04:48.002 "mask": "0x40", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "blobfs": { 00:04:48.002 "mask": "0x80", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "dsa": { 00:04:48.002 "mask": "0x200", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "thread": { 00:04:48.002 "mask": "0x400", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "nvme_pcie": { 00:04:48.002 "mask": "0x800", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "iaa": { 00:04:48.002 "mask": "0x1000", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "nvme_tcp": { 00:04:48.002 "mask": "0x2000", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 }, 00:04:48.002 "bdev_nvme": { 00:04:48.002 "mask": "0x4000", 00:04:48.002 "tpoint_mask": "0x0" 00:04:48.002 } 00:04:48.002 }' 00:04:48.002 19:57:55 -- rpc/rpc.sh@43 -- # jq length 00:04:48.261 19:57:55 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:48.261 19:57:55 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:48.261 19:57:55 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:48.261 19:57:55 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:48.261 19:57:55 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:48.261 19:57:55 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:48.261 19:57:55 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:48.261 19:57:55 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:48.261 19:57:55 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:48.261 00:04:48.261 real 0m0.161s 00:04:48.261 user 0m0.132s 00:04:48.261 sys 0m0.019s 00:04:48.261 19:57:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.261 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 ************************************ 00:04:48.261 END TEST rpc_trace_cmd_test 00:04:48.261 ************************************ 00:04:48.261 19:57:55 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:48.261 19:57:55 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:48.261 19:57:55 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:48.261 19:57:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.261 19:57:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.261 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 ************************************ 00:04:48.261 START TEST rpc_daemon_integrity 00:04:48.261 ************************************ 00:04:48.261 19:57:55 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:48.261 19:57:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.261 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.261 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.261 19:57:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.261 19:57:55 -- rpc/rpc.sh@13 -- # jq length 00:04:48.261 19:57:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.261 19:57:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.261 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.261 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.261 19:57:55 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:48.261 19:57:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.261 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.261 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.261 19:57:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.261 { 00:04:48.261 "name": "Malloc2", 00:04:48.261 "aliases": [ 00:04:48.261 "7e393e2e-61d8-45b3-a0f4-2b6bcfe1f971" 00:04:48.261 ], 00:04:48.261 "product_name": "Malloc disk", 00:04:48.261 "block_size": 512, 00:04:48.261 "num_blocks": 16384, 00:04:48.261 "uuid": "7e393e2e-61d8-45b3-a0f4-2b6bcfe1f971", 00:04:48.261 "assigned_rate_limits": { 00:04:48.261 "rw_ios_per_sec": 0, 00:04:48.261 "rw_mbytes_per_sec": 0, 00:04:48.261 "r_mbytes_per_sec": 0, 00:04:48.261 "w_mbytes_per_sec": 0 00:04:48.261 }, 00:04:48.261 "claimed": false, 00:04:48.261 "zoned": false, 00:04:48.261 "supported_io_types": { 00:04:48.261 "read": true, 00:04:48.261 "write": true, 00:04:48.261 "unmap": true, 00:04:48.261 "write_zeroes": true, 00:04:48.261 "flush": true, 00:04:48.261 "reset": true, 00:04:48.261 "compare": false, 00:04:48.261 "compare_and_write": false, 00:04:48.261 "abort": true, 00:04:48.261 "nvme_admin": false, 00:04:48.261 "nvme_io": false 00:04:48.261 }, 00:04:48.261 "memory_domains": [ 00:04:48.261 { 00:04:48.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.261 "dma_device_type": 2 00:04:48.261 } 00:04:48.261 ], 00:04:48.261 "driver_specific": {} 00:04:48.261 } 00:04:48.261 ]' 00:04:48.261 19:57:55 -- rpc/rpc.sh@17 -- # jq length 00:04:48.520 19:57:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.520 19:57:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:48.520 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.520 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.520 [2024-12-16 19:57:55.915166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:48.520 [2024-12-16 19:57:55.915220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.520 [2024-12-16 19:57:55.915241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:48.520 [2024-12-16 19:57:55.915252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.520 [2024-12-16 19:57:55.917352] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.520 [2024-12-16 19:57:55.917387] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.520 Passthru0 00:04:48.520 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.520 19:57:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.520 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.520 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.520 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.520 19:57:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.520 { 00:04:48.520 "name": "Malloc2", 00:04:48.520 "aliases": [ 00:04:48.520 "7e393e2e-61d8-45b3-a0f4-2b6bcfe1f971" 00:04:48.520 ], 00:04:48.520 "product_name": "Malloc disk", 00:04:48.520 "block_size": 512, 00:04:48.520 "num_blocks": 16384, 00:04:48.520 "uuid": "7e393e2e-61d8-45b3-a0f4-2b6bcfe1f971", 00:04:48.520 "assigned_rate_limits": { 00:04:48.520 "rw_ios_per_sec": 0, 00:04:48.520 "rw_mbytes_per_sec": 0, 00:04:48.520 "r_mbytes_per_sec": 0, 00:04:48.520 
"w_mbytes_per_sec": 0 00:04:48.520 }, 00:04:48.520 "claimed": true, 00:04:48.520 "claim_type": "exclusive_write", 00:04:48.520 "zoned": false, 00:04:48.520 "supported_io_types": { 00:04:48.520 "read": true, 00:04:48.520 "write": true, 00:04:48.520 "unmap": true, 00:04:48.520 "write_zeroes": true, 00:04:48.520 "flush": true, 00:04:48.520 "reset": true, 00:04:48.520 "compare": false, 00:04:48.520 "compare_and_write": false, 00:04:48.520 "abort": true, 00:04:48.520 "nvme_admin": false, 00:04:48.520 "nvme_io": false 00:04:48.520 }, 00:04:48.520 "memory_domains": [ 00:04:48.520 { 00:04:48.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.520 "dma_device_type": 2 00:04:48.520 } 00:04:48.520 ], 00:04:48.520 "driver_specific": {} 00:04:48.520 }, 00:04:48.520 { 00:04:48.520 "name": "Passthru0", 00:04:48.520 "aliases": [ 00:04:48.520 "6a729aa7-6aa8-5e28-83f0-fc0bfc46f414" 00:04:48.520 ], 00:04:48.520 "product_name": "passthru", 00:04:48.520 "block_size": 512, 00:04:48.520 "num_blocks": 16384, 00:04:48.520 "uuid": "6a729aa7-6aa8-5e28-83f0-fc0bfc46f414", 00:04:48.520 "assigned_rate_limits": { 00:04:48.520 "rw_ios_per_sec": 0, 00:04:48.520 "rw_mbytes_per_sec": 0, 00:04:48.520 "r_mbytes_per_sec": 0, 00:04:48.520 "w_mbytes_per_sec": 0 00:04:48.520 }, 00:04:48.520 "claimed": false, 00:04:48.520 "zoned": false, 00:04:48.520 "supported_io_types": { 00:04:48.520 "read": true, 00:04:48.520 "write": true, 00:04:48.520 "unmap": true, 00:04:48.520 "write_zeroes": true, 00:04:48.520 "flush": true, 00:04:48.520 "reset": true, 00:04:48.520 "compare": false, 00:04:48.520 "compare_and_write": false, 00:04:48.520 "abort": true, 00:04:48.520 "nvme_admin": false, 00:04:48.520 "nvme_io": false 00:04:48.520 }, 00:04:48.520 "memory_domains": [ 00:04:48.520 { 00:04:48.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.520 "dma_device_type": 2 00:04:48.520 } 00:04:48.520 ], 00:04:48.520 "driver_specific": { 00:04:48.520 "passthru": { 00:04:48.520 "name": "Passthru0", 00:04:48.520 "base_bdev_name": "Malloc2" 00:04:48.520 } 00:04:48.520 } 00:04:48.520 } 00:04:48.520 ]' 00:04:48.520 19:57:55 -- rpc/rpc.sh@21 -- # jq length 00:04:48.520 19:57:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.520 19:57:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.520 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.520 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.520 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.520 19:57:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:48.520 19:57:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.520 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.520 19:57:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.520 19:57:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:48.520 19:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.520 19:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:48.520 19:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.520 19:57:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.521 19:57:56 -- rpc/rpc.sh@26 -- # jq length 00:04:48.521 19:57:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.521 00:04:48.521 real 0m0.237s 00:04:48.521 user 0m0.124s 00:04:48.521 sys 0m0.033s 00:04:48.521 19:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.521 ************************************ 00:04:48.521 END TEST rpc_daemon_integrity 00:04:48.521 ************************************ 00:04:48.521 
19:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:48.521 19:57:56 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:48.521 19:57:56 -- rpc/rpc.sh@84 -- # killprocess 56154 00:04:48.521 19:57:56 -- common/autotest_common.sh@936 -- # '[' -z 56154 ']' 00:04:48.521 19:57:56 -- common/autotest_common.sh@940 -- # kill -0 56154 00:04:48.521 19:57:56 -- common/autotest_common.sh@941 -- # uname 00:04:48.521 19:57:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.521 19:57:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56154 00:04:48.521 19:57:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.521 19:57:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.521 killing process with pid 56154 00:04:48.521 19:57:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56154' 00:04:48.521 19:57:56 -- common/autotest_common.sh@955 -- # kill 56154 00:04:48.521 19:57:56 -- common/autotest_common.sh@960 -- # wait 56154 00:04:49.898 00:04:49.898 real 0m3.896s 00:04:49.898 user 0m4.426s 00:04:49.898 sys 0m0.601s 00:04:49.898 19:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.898 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.898 ************************************ 00:04:49.898 END TEST rpc 00:04:49.898 ************************************ 00:04:49.898 19:57:57 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.898 19:57:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.898 19:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.898 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.898 ************************************ 00:04:49.898 START TEST rpc_client 00:04:49.898 ************************************ 00:04:49.898 19:57:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.898 * Looking for test storage... 00:04:49.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:49.898 19:57:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.898 19:57:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.898 19:57:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.898 19:57:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.898 19:57:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.898 19:57:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.898 19:57:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.898 19:57:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.898 19:57:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.898 19:57:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.898 19:57:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.898 19:57:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.898 19:57:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.898 19:57:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.898 19:57:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.898 19:57:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.898 19:57:57 -- scripts/common.sh@344 -- # : 1 00:04:49.898 19:57:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.898 19:57:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.898 19:57:57 -- scripts/common.sh@364 -- # decimal 1 00:04:49.898 19:57:57 -- scripts/common.sh@352 -- # local d=1 00:04:49.898 19:57:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.898 19:57:57 -- scripts/common.sh@354 -- # echo 1 00:04:49.898 19:57:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.898 19:57:57 -- scripts/common.sh@365 -- # decimal 2 00:04:49.898 19:57:57 -- scripts/common.sh@352 -- # local d=2 00:04:49.898 19:57:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.898 19:57:57 -- scripts/common.sh@354 -- # echo 2 00:04:49.898 19:57:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.898 19:57:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.898 19:57:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.898 19:57:57 -- scripts/common.sh@367 -- # return 0 00:04:49.898 19:57:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.898 19:57:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.898 --rc genhtml_branch_coverage=1 00:04:49.898 --rc genhtml_function_coverage=1 00:04:49.898 --rc genhtml_legend=1 00:04:49.898 --rc geninfo_all_blocks=1 00:04:49.898 --rc geninfo_unexecuted_blocks=1 00:04:49.898 00:04:49.898 ' 00:04:49.898 19:57:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.898 --rc genhtml_branch_coverage=1 00:04:49.898 --rc genhtml_function_coverage=1 00:04:49.899 --rc genhtml_legend=1 00:04:49.899 --rc geninfo_all_blocks=1 00:04:49.899 --rc geninfo_unexecuted_blocks=1 00:04:49.899 00:04:49.899 ' 00:04:49.899 19:57:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.899 --rc genhtml_branch_coverage=1 00:04:49.899 --rc genhtml_function_coverage=1 00:04:49.899 --rc genhtml_legend=1 00:04:49.899 --rc geninfo_all_blocks=1 00:04:49.899 --rc geninfo_unexecuted_blocks=1 00:04:49.899 00:04:49.899 ' 00:04:49.899 19:57:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.899 --rc genhtml_branch_coverage=1 00:04:49.899 --rc genhtml_function_coverage=1 00:04:49.899 --rc genhtml_legend=1 00:04:49.899 --rc geninfo_all_blocks=1 00:04:49.899 --rc geninfo_unexecuted_blocks=1 00:04:49.899 00:04:49.899 ' 00:04:49.899 19:57:57 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.160 OK 00:04:50.160 19:57:57 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.160 00:04:50.160 real 0m0.179s 00:04:50.160 user 0m0.112s 00:04:50.160 sys 0m0.074s 00:04:50.160 19:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.160 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.160 ************************************ 00:04:50.160 END TEST rpc_client 00:04:50.160 ************************************ 00:04:50.160 19:57:57 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.160 19:57:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.160 19:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.160 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.160 ************************************ 00:04:50.160 START TEST 
json_config 00:04:50.160 ************************************ 00:04:50.160 19:57:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.160 19:57:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.160 19:57:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.160 19:57:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.160 19:57:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.160 19:57:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.160 19:57:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.160 19:57:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.160 19:57:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.160 19:57:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.160 19:57:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.160 19:57:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.160 19:57:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.160 19:57:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.160 19:57:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.160 19:57:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.160 19:57:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.160 19:57:57 -- scripts/common.sh@344 -- # : 1 00:04:50.160 19:57:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.160 19:57:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.160 19:57:57 -- scripts/common.sh@364 -- # decimal 1 00:04:50.160 19:57:57 -- scripts/common.sh@352 -- # local d=1 00:04:50.160 19:57:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.160 19:57:57 -- scripts/common.sh@354 -- # echo 1 00:04:50.160 19:57:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.160 19:57:57 -- scripts/common.sh@365 -- # decimal 2 00:04:50.160 19:57:57 -- scripts/common.sh@352 -- # local d=2 00:04:50.160 19:57:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.160 19:57:57 -- scripts/common.sh@354 -- # echo 2 00:04:50.160 19:57:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.160 19:57:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.160 19:57:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.160 19:57:57 -- scripts/common.sh@367 -- # return 0 00:04:50.160 19:57:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.160 19:57:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.160 --rc genhtml_branch_coverage=1 00:04:50.160 --rc genhtml_function_coverage=1 00:04:50.160 --rc genhtml_legend=1 00:04:50.160 --rc geninfo_all_blocks=1 00:04:50.160 --rc geninfo_unexecuted_blocks=1 00:04:50.160 00:04:50.160 ' 00:04:50.160 19:57:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.160 --rc genhtml_branch_coverage=1 00:04:50.160 --rc genhtml_function_coverage=1 00:04:50.160 --rc genhtml_legend=1 00:04:50.160 --rc geninfo_all_blocks=1 00:04:50.160 --rc geninfo_unexecuted_blocks=1 00:04:50.160 00:04:50.160 ' 00:04:50.160 19:57:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.160 --rc genhtml_branch_coverage=1 00:04:50.160 --rc genhtml_function_coverage=1 00:04:50.160 --rc genhtml_legend=1 00:04:50.160 --rc 
geninfo_all_blocks=1 00:04:50.160 --rc geninfo_unexecuted_blocks=1 00:04:50.160 00:04:50.160 ' 00:04:50.160 19:57:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.160 --rc genhtml_branch_coverage=1 00:04:50.160 --rc genhtml_function_coverage=1 00:04:50.160 --rc genhtml_legend=1 00:04:50.160 --rc geninfo_all_blocks=1 00:04:50.160 --rc geninfo_unexecuted_blocks=1 00:04:50.160 00:04:50.160 ' 00:04:50.160 19:57:57 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.160 19:57:57 -- nvmf/common.sh@7 -- # uname -s 00:04:50.160 19:57:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.160 19:57:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.160 19:57:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.160 19:57:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.160 19:57:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.160 19:57:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.160 19:57:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.160 19:57:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.160 19:57:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.160 19:57:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.160 19:57:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab2fd980-8183-46c1-a9af-566bb5c57102 00:04:50.160 19:57:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=ab2fd980-8183-46c1-a9af-566bb5c57102 00:04:50.160 19:57:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.160 19:57:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.160 19:57:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.160 19:57:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.160 19:57:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.160 19:57:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.160 19:57:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.160 19:57:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.160 19:57:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.160 19:57:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.160 
19:57:57 -- paths/export.sh@5 -- # export PATH 00:04:50.160 19:57:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.160 19:57:57 -- nvmf/common.sh@46 -- # : 0 00:04:50.160 19:57:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:50.160 19:57:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:50.160 19:57:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:50.160 19:57:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.160 19:57:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.160 19:57:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:50.160 19:57:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:50.160 19:57:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:50.160 19:57:57 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:50.160 19:57:57 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:50.160 19:57:57 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:50.161 19:57:57 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.161 19:57:57 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:50.161 WARNING: No tests are enabled so not running JSON configuration tests 00:04:50.161 19:57:57 -- json_config/json_config.sh@27 -- # exit 0 00:04:50.161 00:04:50.161 real 0m0.136s 00:04:50.161 user 0m0.084s 00:04:50.161 sys 0m0.049s 00:04:50.161 19:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.161 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.161 ************************************ 00:04:50.161 END TEST json_config 00:04:50.161 ************************************ 00:04:50.161 19:57:57 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.161 19:57:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.161 19:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.161 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.161 ************************************ 00:04:50.161 START TEST json_config_extra_key 00:04:50.161 ************************************ 00:04:50.161 19:57:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.423 19:57:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.423 19:57:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.423 19:57:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.423 19:57:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.423 19:57:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.423 19:57:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.423 19:57:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.423 19:57:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.423 19:57:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.423 19:57:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.423 19:57:57 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:50.423 19:57:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.423 19:57:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.423 19:57:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.423 19:57:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.423 19:57:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.423 19:57:57 -- scripts/common.sh@344 -- # : 1 00:04:50.423 19:57:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.423 19:57:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.423 19:57:57 -- scripts/common.sh@364 -- # decimal 1 00:04:50.423 19:57:57 -- scripts/common.sh@352 -- # local d=1 00:04:50.423 19:57:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.423 19:57:57 -- scripts/common.sh@354 -- # echo 1 00:04:50.423 19:57:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.423 19:57:57 -- scripts/common.sh@365 -- # decimal 2 00:04:50.423 19:57:57 -- scripts/common.sh@352 -- # local d=2 00:04:50.423 19:57:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.423 19:57:57 -- scripts/common.sh@354 -- # echo 2 00:04:50.423 19:57:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.423 19:57:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.423 19:57:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.423 19:57:57 -- scripts/common.sh@367 -- # return 0 00:04:50.423 19:57:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.423 19:57:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.423 --rc genhtml_branch_coverage=1 00:04:50.423 --rc genhtml_function_coverage=1 00:04:50.423 --rc genhtml_legend=1 00:04:50.423 --rc geninfo_all_blocks=1 00:04:50.423 --rc geninfo_unexecuted_blocks=1 00:04:50.423 00:04:50.423 ' 00:04:50.423 19:57:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.423 --rc genhtml_branch_coverage=1 00:04:50.423 --rc genhtml_function_coverage=1 00:04:50.423 --rc genhtml_legend=1 00:04:50.423 --rc geninfo_all_blocks=1 00:04:50.423 --rc geninfo_unexecuted_blocks=1 00:04:50.423 00:04:50.423 ' 00:04:50.423 19:57:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.423 --rc genhtml_branch_coverage=1 00:04:50.423 --rc genhtml_function_coverage=1 00:04:50.423 --rc genhtml_legend=1 00:04:50.423 --rc geninfo_all_blocks=1 00:04:50.423 --rc geninfo_unexecuted_blocks=1 00:04:50.423 00:04:50.423 ' 00:04:50.423 19:57:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.423 --rc genhtml_branch_coverage=1 00:04:50.423 --rc genhtml_function_coverage=1 00:04:50.423 --rc genhtml_legend=1 00:04:50.423 --rc geninfo_all_blocks=1 00:04:50.423 --rc geninfo_unexecuted_blocks=1 00:04:50.423 00:04:50.423 ' 00:04:50.423 19:57:57 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.423 19:57:57 -- nvmf/common.sh@7 -- # uname -s 00:04:50.423 19:57:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.423 19:57:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.423 19:57:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.423 19:57:57 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:50.423 19:57:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.423 19:57:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.423 19:57:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.423 19:57:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.423 19:57:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.423 19:57:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.423 19:57:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab2fd980-8183-46c1-a9af-566bb5c57102 00:04:50.423 19:57:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=ab2fd980-8183-46c1-a9af-566bb5c57102 00:04:50.423 19:57:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.423 19:57:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.423 19:57:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.423 19:57:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.423 19:57:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.423 19:57:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.423 19:57:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.423 19:57:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.423 19:57:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.423 19:57:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.423 19:57:57 -- paths/export.sh@5 -- # export PATH 00:04:50.424 19:57:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.424 19:57:57 -- nvmf/common.sh@46 -- # : 0 00:04:50.424 19:57:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:50.424 19:57:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:50.424 19:57:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:50.424 19:57:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.424 19:57:57 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.424 19:57:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:50.424 19:57:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:50.424 19:57:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.424 INFO: launching applications... 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56465 00:04:50.424 Waiting for target to run... 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56465 /var/tmp/spdk_tgt.sock 00:04:50.424 19:57:57 -- common/autotest_common.sh@829 -- # '[' -z 56465 ']' 00:04:50.424 19:57:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.424 19:57:57 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.424 19:57:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.424 19:57:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.424 19:57:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.424 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.424 [2024-12-16 19:57:57.966464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
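Stripped of the xtrace noise, the json_config_extra_key launch just traced boils down to a launch-and-wait step: start spdk_tgt with an extra JSON config on a private RPC socket, then block until that socket answers. A minimal stand-in for that step (the real waitforlisten helper in autotest_common.sh does more), using only the paths and flags visible in the trace:

# Launch the target with the extra JSON config, as traced above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
  -r /var/tmp/spdk_tgt.sock \
  --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!

# Rough equivalent of waitforlisten: poll the RPC socket until it responds.
for _ in $(seq 1 100); do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1 && break
  sleep 0.1
done

Teardown, traced further on, is the mirror image: send kill -SIGINT to the app pid, then poll kill -0 with short sleeps (up to 30 iterations of 0.5 s in this run) until the process is gone.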
00:04:50.424 [2024-12-16 19:57:57.966579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56465 ] 00:04:50.685 [2024-12-16 19:57:58.295717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.947 [2024-12-16 19:57:58.424903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.947 [2024-12-16 19:57:58.425058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.891 19:57:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.891 19:57:59 -- common/autotest_common.sh@862 -- # return 0 00:04:51.891 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:51.891 INFO: shutting down applications... 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56465 ]] 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56465 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:51.891 19:57:59 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:52.464 19:57:59 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:52.464 19:57:59 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:52.464 19:57:59 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:52.464 19:57:59 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:53.037 19:58:00 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:53.037 19:58:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:53.037 19:58:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:53.037 19:58:00 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:53.609 SPDK target shutdown done 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:53.609 Success 00:04:53.609 19:58:00 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:53.609 00:04:53.609 real 0m3.211s 00:04:53.609 user 0m3.050s 00:04:53.609 sys 0m0.397s 00:04:53.609 19:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.609 19:58:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.609 ************************************ 00:04:53.609 END TEST json_config_extra_key 00:04:53.610 
************************************ 00:04:53.610 19:58:01 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.610 19:58:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.610 19:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.610 19:58:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.610 ************************************ 00:04:53.610 START TEST alias_rpc 00:04:53.610 ************************************ 00:04:53.610 19:58:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.610 * Looking for test storage... 00:04:53.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.610 19:58:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.610 19:58:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.610 19:58:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.610 19:58:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.610 19:58:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.610 19:58:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.610 19:58:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.610 19:58:01 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.610 19:58:01 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.610 19:58:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.610 19:58:01 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.610 19:58:01 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.610 19:58:01 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.610 19:58:01 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.610 19:58:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.610 19:58:01 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.610 19:58:01 -- scripts/common.sh@344 -- # : 1 00:04:53.610 19:58:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.610 19:58:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.610 19:58:01 -- scripts/common.sh@364 -- # decimal 1 00:04:53.610 19:58:01 -- scripts/common.sh@352 -- # local d=1 00:04:53.610 19:58:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.610 19:58:01 -- scripts/common.sh@354 -- # echo 1 00:04:53.610 19:58:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.610 19:58:01 -- scripts/common.sh@365 -- # decimal 2 00:04:53.610 19:58:01 -- scripts/common.sh@352 -- # local d=2 00:04:53.610 19:58:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.610 19:58:01 -- scripts/common.sh@354 -- # echo 2 00:04:53.610 19:58:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.610 19:58:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.610 19:58:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.611 19:58:01 -- scripts/common.sh@367 -- # return 0 00:04:53.611 19:58:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.611 19:58:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.611 --rc genhtml_branch_coverage=1 00:04:53.611 --rc genhtml_function_coverage=1 00:04:53.611 --rc genhtml_legend=1 00:04:53.611 --rc geninfo_all_blocks=1 00:04:53.611 --rc geninfo_unexecuted_blocks=1 00:04:53.611 00:04:53.611 ' 00:04:53.611 19:58:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.611 --rc genhtml_branch_coverage=1 00:04:53.611 --rc genhtml_function_coverage=1 00:04:53.611 --rc genhtml_legend=1 00:04:53.611 --rc geninfo_all_blocks=1 00:04:53.611 --rc geninfo_unexecuted_blocks=1 00:04:53.611 00:04:53.611 ' 00:04:53.611 19:58:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.611 --rc genhtml_branch_coverage=1 00:04:53.611 --rc genhtml_function_coverage=1 00:04:53.611 --rc genhtml_legend=1 00:04:53.611 --rc geninfo_all_blocks=1 00:04:53.611 --rc geninfo_unexecuted_blocks=1 00:04:53.612 00:04:53.612 ' 00:04:53.612 19:58:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.612 --rc genhtml_branch_coverage=1 00:04:53.612 --rc genhtml_function_coverage=1 00:04:53.612 --rc genhtml_legend=1 00:04:53.612 --rc geninfo_all_blocks=1 00:04:53.612 --rc geninfo_unexecuted_blocks=1 00:04:53.612 00:04:53.612 ' 00:04:53.612 19:58:01 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.612 19:58:01 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56558 00:04:53.612 19:58:01 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.612 19:58:01 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56558 00:04:53.612 19:58:01 -- common/autotest_common.sh@829 -- # '[' -z 56558 ']' 00:04:53.612 19:58:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.612 19:58:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.612 19:58:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
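The scripts/common.sh trace that repeats ahead of each of these tests is a small version gate: the installed lcov version (1.15 here, taken from lcov --version | awk '{print $NF}') is split on '.', '-' and ':' and compared field by field against 2, and because 1.15 sorts before 2 the harness keeps the pre-2.0 style '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options. A simplified sketch of that comparison, not the exact scripts/common.sh source (the real helper also validates each field with its decimal check):

# lt A B -> exit 0 when version A sorts strictly before version B.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # versions are equal
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
  lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi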
00:04:53.612 19:58:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.612 19:58:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.612 [2024-12-16 19:58:01.235538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:53.614 [2024-12-16 19:58:01.236043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56558 ] 00:04:53.874 [2024-12-16 19:58:01.383292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.134 [2024-12-16 19:58:01.522465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.134 [2024-12-16 19:58:01.522638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.707 19:58:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.707 19:58:02 -- common/autotest_common.sh@862 -- # return 0 00:04:54.707 19:58:02 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:54.707 19:58:02 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56558 00:04:54.707 19:58:02 -- common/autotest_common.sh@936 -- # '[' -z 56558 ']' 00:04:54.707 19:58:02 -- common/autotest_common.sh@940 -- # kill -0 56558 00:04:54.707 19:58:02 -- common/autotest_common.sh@941 -- # uname 00:04:54.707 19:58:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:54.707 19:58:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56558 00:04:54.707 19:58:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:54.707 killing process with pid 56558 00:04:54.708 19:58:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:54.708 19:58:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56558' 00:04:54.708 19:58:02 -- common/autotest_common.sh@955 -- # kill 56558 00:04:54.708 19:58:02 -- common/autotest_common.sh@960 -- # wait 56558 00:04:56.613 00:04:56.613 real 0m2.754s 00:04:56.613 user 0m2.852s 00:04:56.613 sys 0m0.378s 00:04:56.613 19:58:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.613 19:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.613 ************************************ 00:04:56.613 END TEST alias_rpc 00:04:56.613 ************************************ 00:04:56.613 19:58:03 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:56.613 19:58:03 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.613 19:58:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.613 19:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.613 19:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.613 ************************************ 00:04:56.613 START TEST spdkcli_tcp 00:04:56.613 ************************************ 00:04:56.613 19:58:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.613 * Looking for test storage... 
00:04:56.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.613 19:58:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.613 19:58:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.613 19:58:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.613 19:58:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.613 19:58:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.613 19:58:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.613 19:58:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.613 19:58:03 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.613 19:58:03 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.613 19:58:03 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.613 19:58:03 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.613 19:58:03 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.613 19:58:03 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.613 19:58:03 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.613 19:58:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.613 19:58:03 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.613 19:58:03 -- scripts/common.sh@344 -- # : 1 00:04:56.613 19:58:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.613 19:58:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.613 19:58:03 -- scripts/common.sh@364 -- # decimal 1 00:04:56.613 19:58:03 -- scripts/common.sh@352 -- # local d=1 00:04:56.613 19:58:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.613 19:58:03 -- scripts/common.sh@354 -- # echo 1 00:04:56.613 19:58:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.613 19:58:03 -- scripts/common.sh@365 -- # decimal 2 00:04:56.613 19:58:03 -- scripts/common.sh@352 -- # local d=2 00:04:56.613 19:58:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.613 19:58:03 -- scripts/common.sh@354 -- # echo 2 00:04:56.613 19:58:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.613 19:58:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.613 19:58:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.613 19:58:03 -- scripts/common.sh@367 -- # return 0 00:04:56.613 19:58:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.613 19:58:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.613 --rc genhtml_branch_coverage=1 00:04:56.613 --rc genhtml_function_coverage=1 00:04:56.613 --rc genhtml_legend=1 00:04:56.613 --rc geninfo_all_blocks=1 00:04:56.613 --rc geninfo_unexecuted_blocks=1 00:04:56.613 00:04:56.613 ' 00:04:56.613 19:58:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.613 --rc genhtml_branch_coverage=1 00:04:56.613 --rc genhtml_function_coverage=1 00:04:56.613 --rc genhtml_legend=1 00:04:56.613 --rc geninfo_all_blocks=1 00:04:56.613 --rc geninfo_unexecuted_blocks=1 00:04:56.613 00:04:56.613 ' 00:04:56.613 19:58:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.613 --rc genhtml_branch_coverage=1 00:04:56.613 --rc genhtml_function_coverage=1 00:04:56.613 --rc genhtml_legend=1 00:04:56.613 --rc geninfo_all_blocks=1 00:04:56.613 --rc geninfo_unexecuted_blocks=1 00:04:56.613 00:04:56.614 ' 00:04:56.614 19:58:03 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.614 --rc genhtml_branch_coverage=1 00:04:56.614 --rc genhtml_function_coverage=1 00:04:56.614 --rc genhtml_legend=1 00:04:56.614 --rc geninfo_all_blocks=1 00:04:56.614 --rc geninfo_unexecuted_blocks=1 00:04:56.614 00:04:56.614 ' 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.614 19:58:03 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.614 19:58:03 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.614 19:58:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.614 19:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56653 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@27 -- # waitforlisten 56653 00:04:56.614 19:58:03 -- common/autotest_common.sh@829 -- # '[' -z 56653 ']' 00:04:56.614 19:58:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.614 19:58:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.614 19:58:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.614 19:58:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.614 19:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.614 19:58:03 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.614 [2024-12-16 19:58:04.013573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
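Unlike the earlier tests, which talk to the target only over its Unix-domain socket, spdkcli/tcp.sh also exercises the RPC server over TCP: in the run below it forwards 127.0.0.1:9998 (the IP_ADDRESS and PORT set above) to /var/tmp/spdk.sock with socat and then lists every registered method over that TCP endpoint. The same two commands, pulled out of the trace:

# Bridge the target's Unix-domain RPC socket to TCP 127.0.0.1:9998.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

# Query the server over TCP, with the same retry, timeout, address and port flags the test uses.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods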
00:04:56.614 [2024-12-16 19:58:04.013696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56653 ] 00:04:56.614 [2024-12-16 19:58:04.165710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.872 [2024-12-16 19:58:04.377350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.872 [2024-12-16 19:58:04.377698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.872 [2024-12-16 19:58:04.377766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.291 19:58:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.291 19:58:05 -- common/autotest_common.sh@862 -- # return 0 00:04:58.291 19:58:05 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:58.291 19:58:05 -- spdkcli/tcp.sh@31 -- # socat_pid=56672 00:04:58.291 19:58:05 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:58.291 [ 00:04:58.291 "bdev_malloc_delete", 00:04:58.291 "bdev_malloc_create", 00:04:58.291 "bdev_null_resize", 00:04:58.291 "bdev_null_delete", 00:04:58.291 "bdev_null_create", 00:04:58.291 "bdev_nvme_cuse_unregister", 00:04:58.291 "bdev_nvme_cuse_register", 00:04:58.291 "bdev_opal_new_user", 00:04:58.291 "bdev_opal_set_lock_state", 00:04:58.291 "bdev_opal_delete", 00:04:58.291 "bdev_opal_get_info", 00:04:58.291 "bdev_opal_create", 00:04:58.291 "bdev_nvme_opal_revert", 00:04:58.291 "bdev_nvme_opal_init", 00:04:58.291 "bdev_nvme_send_cmd", 00:04:58.291 "bdev_nvme_get_path_iostat", 00:04:58.291 "bdev_nvme_get_mdns_discovery_info", 00:04:58.291 "bdev_nvme_stop_mdns_discovery", 00:04:58.291 "bdev_nvme_start_mdns_discovery", 00:04:58.291 "bdev_nvme_set_multipath_policy", 00:04:58.291 "bdev_nvme_set_preferred_path", 00:04:58.291 "bdev_nvme_get_io_paths", 00:04:58.291 "bdev_nvme_remove_error_injection", 00:04:58.291 "bdev_nvme_add_error_injection", 00:04:58.291 "bdev_nvme_get_discovery_info", 00:04:58.291 "bdev_nvme_stop_discovery", 00:04:58.291 "bdev_nvme_start_discovery", 00:04:58.291 "bdev_nvme_get_controller_health_info", 00:04:58.291 "bdev_nvme_disable_controller", 00:04:58.291 "bdev_nvme_enable_controller", 00:04:58.291 "bdev_nvme_reset_controller", 00:04:58.291 "bdev_nvme_get_transport_statistics", 00:04:58.291 "bdev_nvme_apply_firmware", 00:04:58.291 "bdev_nvme_detach_controller", 00:04:58.291 "bdev_nvme_get_controllers", 00:04:58.291 "bdev_nvme_attach_controller", 00:04:58.291 "bdev_nvme_set_hotplug", 00:04:58.291 "bdev_nvme_set_options", 00:04:58.291 "bdev_passthru_delete", 00:04:58.291 "bdev_passthru_create", 00:04:58.291 "bdev_lvol_grow_lvstore", 00:04:58.291 "bdev_lvol_get_lvols", 00:04:58.291 "bdev_lvol_get_lvstores", 00:04:58.291 "bdev_lvol_delete", 00:04:58.291 "bdev_lvol_set_read_only", 00:04:58.291 "bdev_lvol_resize", 00:04:58.291 "bdev_lvol_decouple_parent", 00:04:58.291 "bdev_lvol_inflate", 00:04:58.291 "bdev_lvol_rename", 00:04:58.291 "bdev_lvol_clone_bdev", 00:04:58.291 "bdev_lvol_clone", 00:04:58.291 "bdev_lvol_snapshot", 00:04:58.291 "bdev_lvol_create", 00:04:58.291 "bdev_lvol_delete_lvstore", 00:04:58.291 "bdev_lvol_rename_lvstore", 00:04:58.291 "bdev_lvol_create_lvstore", 00:04:58.291 "bdev_raid_set_options", 00:04:58.291 "bdev_raid_remove_base_bdev", 00:04:58.291 "bdev_raid_add_base_bdev", 
00:04:58.291 "bdev_raid_delete", 00:04:58.291 "bdev_raid_create", 00:04:58.291 "bdev_raid_get_bdevs", 00:04:58.292 "bdev_error_inject_error", 00:04:58.292 "bdev_error_delete", 00:04:58.292 "bdev_error_create", 00:04:58.292 "bdev_split_delete", 00:04:58.292 "bdev_split_create", 00:04:58.292 "bdev_delay_delete", 00:04:58.292 "bdev_delay_create", 00:04:58.292 "bdev_delay_update_latency", 00:04:58.292 "bdev_zone_block_delete", 00:04:58.292 "bdev_zone_block_create", 00:04:58.292 "blobfs_create", 00:04:58.292 "blobfs_detect", 00:04:58.292 "blobfs_set_cache_size", 00:04:58.292 "bdev_xnvme_delete", 00:04:58.292 "bdev_xnvme_create", 00:04:58.292 "bdev_aio_delete", 00:04:58.292 "bdev_aio_rescan", 00:04:58.292 "bdev_aio_create", 00:04:58.292 "bdev_ftl_set_property", 00:04:58.292 "bdev_ftl_get_properties", 00:04:58.292 "bdev_ftl_get_stats", 00:04:58.292 "bdev_ftl_unmap", 00:04:58.292 "bdev_ftl_unload", 00:04:58.292 "bdev_ftl_delete", 00:04:58.292 "bdev_ftl_load", 00:04:58.292 "bdev_ftl_create", 00:04:58.292 "bdev_virtio_attach_controller", 00:04:58.292 "bdev_virtio_scsi_get_devices", 00:04:58.292 "bdev_virtio_detach_controller", 00:04:58.292 "bdev_virtio_blk_set_hotplug", 00:04:58.292 "bdev_iscsi_delete", 00:04:58.292 "bdev_iscsi_create", 00:04:58.292 "bdev_iscsi_set_options", 00:04:58.292 "accel_error_inject_error", 00:04:58.292 "ioat_scan_accel_module", 00:04:58.292 "dsa_scan_accel_module", 00:04:58.292 "iaa_scan_accel_module", 00:04:58.292 "iscsi_set_options", 00:04:58.292 "iscsi_get_auth_groups", 00:04:58.292 "iscsi_auth_group_remove_secret", 00:04:58.292 "iscsi_auth_group_add_secret", 00:04:58.292 "iscsi_delete_auth_group", 00:04:58.292 "iscsi_create_auth_group", 00:04:58.292 "iscsi_set_discovery_auth", 00:04:58.292 "iscsi_get_options", 00:04:58.292 "iscsi_target_node_request_logout", 00:04:58.292 "iscsi_target_node_set_redirect", 00:04:58.292 "iscsi_target_node_set_auth", 00:04:58.292 "iscsi_target_node_add_lun", 00:04:58.292 "iscsi_get_connections", 00:04:58.292 "iscsi_portal_group_set_auth", 00:04:58.292 "iscsi_start_portal_group", 00:04:58.292 "iscsi_delete_portal_group", 00:04:58.292 "iscsi_create_portal_group", 00:04:58.292 "iscsi_get_portal_groups", 00:04:58.292 "iscsi_delete_target_node", 00:04:58.292 "iscsi_target_node_remove_pg_ig_maps", 00:04:58.292 "iscsi_target_node_add_pg_ig_maps", 00:04:58.292 "iscsi_create_target_node", 00:04:58.292 "iscsi_get_target_nodes", 00:04:58.292 "iscsi_delete_initiator_group", 00:04:58.292 "iscsi_initiator_group_remove_initiators", 00:04:58.292 "iscsi_initiator_group_add_initiators", 00:04:58.292 "iscsi_create_initiator_group", 00:04:58.292 "iscsi_get_initiator_groups", 00:04:58.292 "nvmf_set_crdt", 00:04:58.292 "nvmf_set_config", 00:04:58.292 "nvmf_set_max_subsystems", 00:04:58.292 "nvmf_subsystem_get_listeners", 00:04:58.292 "nvmf_subsystem_get_qpairs", 00:04:58.292 "nvmf_subsystem_get_controllers", 00:04:58.292 "nvmf_get_stats", 00:04:58.292 "nvmf_get_transports", 00:04:58.292 "nvmf_create_transport", 00:04:58.292 "nvmf_get_targets", 00:04:58.292 "nvmf_delete_target", 00:04:58.292 "nvmf_create_target", 00:04:58.292 "nvmf_subsystem_allow_any_host", 00:04:58.292 "nvmf_subsystem_remove_host", 00:04:58.292 "nvmf_subsystem_add_host", 00:04:58.292 "nvmf_subsystem_remove_ns", 00:04:58.292 "nvmf_subsystem_add_ns", 00:04:58.292 "nvmf_subsystem_listener_set_ana_state", 00:04:58.292 "nvmf_discovery_get_referrals", 00:04:58.292 "nvmf_discovery_remove_referral", 00:04:58.292 "nvmf_discovery_add_referral", 00:04:58.292 "nvmf_subsystem_remove_listener", 00:04:58.292 
"nvmf_subsystem_add_listener", 00:04:58.292 "nvmf_delete_subsystem", 00:04:58.292 "nvmf_create_subsystem", 00:04:58.292 "nvmf_get_subsystems", 00:04:58.292 "env_dpdk_get_mem_stats", 00:04:58.292 "nbd_get_disks", 00:04:58.292 "nbd_stop_disk", 00:04:58.292 "nbd_start_disk", 00:04:58.292 "ublk_recover_disk", 00:04:58.292 "ublk_get_disks", 00:04:58.292 "ublk_stop_disk", 00:04:58.292 "ublk_start_disk", 00:04:58.292 "ublk_destroy_target", 00:04:58.292 "ublk_create_target", 00:04:58.292 "virtio_blk_create_transport", 00:04:58.292 "virtio_blk_get_transports", 00:04:58.292 "vhost_controller_set_coalescing", 00:04:58.292 "vhost_get_controllers", 00:04:58.292 "vhost_delete_controller", 00:04:58.292 "vhost_create_blk_controller", 00:04:58.292 "vhost_scsi_controller_remove_target", 00:04:58.292 "vhost_scsi_controller_add_target", 00:04:58.292 "vhost_start_scsi_controller", 00:04:58.292 "vhost_create_scsi_controller", 00:04:58.292 "thread_set_cpumask", 00:04:58.292 "framework_get_scheduler", 00:04:58.292 "framework_set_scheduler", 00:04:58.292 "framework_get_reactors", 00:04:58.292 "thread_get_io_channels", 00:04:58.292 "thread_get_pollers", 00:04:58.292 "thread_get_stats", 00:04:58.292 "framework_monitor_context_switch", 00:04:58.292 "spdk_kill_instance", 00:04:58.292 "log_enable_timestamps", 00:04:58.292 "log_get_flags", 00:04:58.292 "log_clear_flag", 00:04:58.292 "log_set_flag", 00:04:58.292 "log_get_level", 00:04:58.292 "log_set_level", 00:04:58.292 "log_get_print_level", 00:04:58.292 "log_set_print_level", 00:04:58.292 "framework_enable_cpumask_locks", 00:04:58.292 "framework_disable_cpumask_locks", 00:04:58.292 "framework_wait_init", 00:04:58.292 "framework_start_init", 00:04:58.292 "scsi_get_devices", 00:04:58.292 "bdev_get_histogram", 00:04:58.292 "bdev_enable_histogram", 00:04:58.292 "bdev_set_qos_limit", 00:04:58.292 "bdev_set_qd_sampling_period", 00:04:58.292 "bdev_get_bdevs", 00:04:58.292 "bdev_reset_iostat", 00:04:58.292 "bdev_get_iostat", 00:04:58.292 "bdev_examine", 00:04:58.292 "bdev_wait_for_examine", 00:04:58.292 "bdev_set_options", 00:04:58.292 "notify_get_notifications", 00:04:58.292 "notify_get_types", 00:04:58.292 "accel_get_stats", 00:04:58.292 "accel_set_options", 00:04:58.292 "accel_set_driver", 00:04:58.292 "accel_crypto_key_destroy", 00:04:58.292 "accel_crypto_keys_get", 00:04:58.292 "accel_crypto_key_create", 00:04:58.292 "accel_assign_opc", 00:04:58.292 "accel_get_module_info", 00:04:58.292 "accel_get_opc_assignments", 00:04:58.292 "vmd_rescan", 00:04:58.292 "vmd_remove_device", 00:04:58.292 "vmd_enable", 00:04:58.292 "sock_set_default_impl", 00:04:58.292 "sock_impl_set_options", 00:04:58.292 "sock_impl_get_options", 00:04:58.292 "iobuf_get_stats", 00:04:58.292 "iobuf_set_options", 00:04:58.292 "framework_get_pci_devices", 00:04:58.292 "framework_get_config", 00:04:58.292 "framework_get_subsystems", 00:04:58.292 "trace_get_info", 00:04:58.292 "trace_get_tpoint_group_mask", 00:04:58.292 "trace_disable_tpoint_group", 00:04:58.292 "trace_enable_tpoint_group", 00:04:58.292 "trace_clear_tpoint_mask", 00:04:58.292 "trace_set_tpoint_mask", 00:04:58.292 "spdk_get_version", 00:04:58.292 "rpc_get_methods" 00:04:58.292 ] 00:04:58.292 19:58:05 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:58.292 19:58:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.292 19:58:05 -- common/autotest_common.sh@10 -- # set +x 00:04:58.292 19:58:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:58.292 19:58:05 -- spdkcli/tcp.sh@38 -- # killprocess 56653 00:04:58.292 
19:58:05 -- common/autotest_common.sh@936 -- # '[' -z 56653 ']' 00:04:58.292 19:58:05 -- common/autotest_common.sh@940 -- # kill -0 56653 00:04:58.292 19:58:05 -- common/autotest_common.sh@941 -- # uname 00:04:58.292 19:58:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.292 19:58:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56653 00:04:58.292 killing process with pid 56653 00:04:58.292 19:58:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.292 19:58:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.292 19:58:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56653' 00:04:58.292 19:58:05 -- common/autotest_common.sh@955 -- # kill 56653 00:04:58.292 19:58:05 -- common/autotest_common.sh@960 -- # wait 56653 00:04:59.665 ************************************ 00:04:59.665 END TEST spdkcli_tcp 00:04:59.665 ************************************ 00:04:59.665 00:04:59.665 real 0m3.376s 00:04:59.665 user 0m6.050s 00:04:59.665 sys 0m0.487s 00:04:59.665 19:58:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.665 19:58:07 -- common/autotest_common.sh@10 -- # set +x 00:04:59.665 19:58:07 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.665 19:58:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.665 19:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.665 19:58:07 -- common/autotest_common.sh@10 -- # set +x 00:04:59.665 ************************************ 00:04:59.665 START TEST dpdk_mem_utility 00:04:59.665 ************************************ 00:04:59.665 19:58:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.665 * Looking for test storage... 00:04:59.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:59.665 19:58:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.665 19:58:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.665 19:58:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.924 19:58:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.924 19:58:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.924 19:58:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.924 19:58:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.924 19:58:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.924 19:58:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.924 19:58:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.924 19:58:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.924 19:58:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.924 19:58:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.924 19:58:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.924 19:58:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.924 19:58:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.924 19:58:07 -- scripts/common.sh@344 -- # : 1 00:04:59.924 19:58:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.924 19:58:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.924 19:58:07 -- scripts/common.sh@364 -- # decimal 1 00:04:59.924 19:58:07 -- scripts/common.sh@352 -- # local d=1 00:04:59.924 19:58:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.924 19:58:07 -- scripts/common.sh@354 -- # echo 1 00:04:59.924 19:58:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.924 19:58:07 -- scripts/common.sh@365 -- # decimal 2 00:04:59.924 19:58:07 -- scripts/common.sh@352 -- # local d=2 00:04:59.924 19:58:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.924 19:58:07 -- scripts/common.sh@354 -- # echo 2 00:04:59.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.924 19:58:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.924 19:58:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.924 19:58:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.924 19:58:07 -- scripts/common.sh@367 -- # return 0 00:04:59.924 19:58:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.924 19:58:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.925 --rc genhtml_branch_coverage=1 00:04:59.925 --rc genhtml_function_coverage=1 00:04:59.925 --rc genhtml_legend=1 00:04:59.925 --rc geninfo_all_blocks=1 00:04:59.925 --rc geninfo_unexecuted_blocks=1 00:04:59.925 00:04:59.925 ' 00:04:59.925 19:58:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.925 --rc genhtml_branch_coverage=1 00:04:59.925 --rc genhtml_function_coverage=1 00:04:59.925 --rc genhtml_legend=1 00:04:59.925 --rc geninfo_all_blocks=1 00:04:59.925 --rc geninfo_unexecuted_blocks=1 00:04:59.925 00:04:59.925 ' 00:04:59.925 19:58:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.925 --rc genhtml_branch_coverage=1 00:04:59.925 --rc genhtml_function_coverage=1 00:04:59.925 --rc genhtml_legend=1 00:04:59.925 --rc geninfo_all_blocks=1 00:04:59.925 --rc geninfo_unexecuted_blocks=1 00:04:59.925 00:04:59.925 ' 00:04:59.925 19:58:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.925 --rc genhtml_branch_coverage=1 00:04:59.925 --rc genhtml_function_coverage=1 00:04:59.925 --rc genhtml_legend=1 00:04:59.925 --rc geninfo_all_blocks=1 00:04:59.925 --rc geninfo_unexecuted_blocks=1 00:04:59.925 00:04:59.925 ' 00:04:59.925 19:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.925 19:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56765 00:04:59.925 19:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56765 00:04:59.925 19:58:07 -- common/autotest_common.sh@829 -- # '[' -z 56765 ']' 00:04:59.925 19:58:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.925 19:58:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.925 19:58:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
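The dpdk_mem_utility test below drives two things: the env_dpdk_get_mem_stats RPC, which makes the running target write a DPDK memory dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py (the MEM_SCRIPT set above), which summarizes that dump, first overall and then per heap with -m 0. Reduced to the commands the trace shows, here as direct rpc.py calls rather than the test's rpc_cmd wrapper:

# Ask the running spdk_tgt to dump its DPDK memory statistics.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> {"filename": "/tmp/spdk_mem_dump.txt"}

# Summarize the dump: heap/mempool/memzone totals, then heap 0 element by element.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0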
00:04:59.925 19:58:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.925 19:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.925 19:58:07 -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 [2024-12-16 19:58:07.436475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.925 [2024-12-16 19:58:07.436762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56765 ] 00:05:00.182 [2024-12-16 19:58:07.583102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.182 [2024-12-16 19:58:07.748055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.182 [2024-12-16 19:58:07.748250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.752 19:58:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.752 19:58:08 -- common/autotest_common.sh@862 -- # return 0 00:05:00.752 19:58:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.752 19:58:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.752 19:58:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.752 19:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:00.752 { 00:05:00.752 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.752 } 00:05:00.752 19:58:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.752 19:58:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.752 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:00.752 1 heaps totaling size 820.000000 MiB 00:05:00.752 size: 820.000000 MiB heap id: 0 00:05:00.752 end heaps---------- 00:05:00.752 8 mempools totaling size 598.116089 MiB 00:05:00.752 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.752 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.752 size: 84.521057 MiB name: bdev_io_56765 00:05:00.752 size: 51.011292 MiB name: evtpool_56765 00:05:00.752 size: 50.003479 MiB name: msgpool_56765 00:05:00.752 size: 21.763794 MiB name: PDU_Pool 00:05:00.752 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.752 size: 0.026123 MiB name: Session_Pool 00:05:00.752 end mempools------- 00:05:00.752 6 memzones totaling size 4.142822 MiB 00:05:00.752 size: 1.000366 MiB name: RG_ring_0_56765 00:05:00.752 size: 1.000366 MiB name: RG_ring_1_56765 00:05:00.752 size: 1.000366 MiB name: RG_ring_4_56765 00:05:00.752 size: 1.000366 MiB name: RG_ring_5_56765 00:05:00.752 size: 0.125366 MiB name: RG_ring_2_56765 00:05:00.752 size: 0.015991 MiB name: RG_ring_3_56765 00:05:00.752 end memzones------- 00:05:00.752 19:58:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.752 heap id: 0 total size: 820.000000 MiB number of busy elements: 305 number of free elements: 18 00:05:00.752 list of free elements. 
size: 18.450317 MiB 00:05:00.752 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:00.752 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:00.752 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:00.752 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:00.752 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:00.752 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:00.752 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:00.752 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:00.752 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:00.752 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:00.752 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:00.752 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:00.752 element at address: 0x20001b000000 with size: 0.563904 MiB 00:05:00.752 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:00.752 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:00.752 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:00.752 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:00.752 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:00.752 list of standard malloc elements. size: 199.285278 MiB 00:05:00.752 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:00.752 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:00.752 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:00.752 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:00.752 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:00.752 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:00.752 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:00.752 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:00.752 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:00.752 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:00.752 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:00.752 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:00.752 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:00.752 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:00.753 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:00.753 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0927c0 with size: 0.000244 MiB 
00:05:00.754 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:00.754 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:00.754 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:00.754 element at 
address: 0x20002846b180 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e280 
with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:00.754 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:00.755 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:00.755 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:00.755 list of memzone associated elements. 
size: 602.264404 MiB 00:05:00.755 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:00.755 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.755 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:00.755 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.755 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:00.755 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56765_0 00:05:00.755 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:00.755 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56765_0 00:05:00.755 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:00.755 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56765_0 00:05:00.755 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:00.755 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.755 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:00.755 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.755 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:00.755 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56765 00:05:00.755 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:00.755 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56765 00:05:00.755 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:00.755 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56765 00:05:00.755 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:00.755 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.755 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:00.755 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.755 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:00.755 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.755 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:00.755 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.755 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:00.755 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56765 00:05:00.755 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:00.755 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56765 00:05:00.755 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:00.755 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56765 00:05:00.755 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:00.755 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56765 00:05:00.755 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:00.755 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56765 00:05:00.755 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:00.755 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.755 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:00.755 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.755 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:00.755 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.755 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:00.755 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56765 00:05:00.755 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:00.755 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.755 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:00.755 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.755 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:00.755 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56765 00:05:00.755 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:00.755 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.755 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:00.755 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56765 00:05:00.755 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:00.755 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56765 00:05:00.755 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:00.755 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.755 19:58:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.755 19:58:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56765 00:05:00.755 19:58:08 -- common/autotest_common.sh@936 -- # '[' -z 56765 ']' 00:05:00.755 19:58:08 -- common/autotest_common.sh@940 -- # kill -0 56765 00:05:00.755 19:58:08 -- common/autotest_common.sh@941 -- # uname 00:05:00.755 19:58:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.755 19:58:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56765 00:05:00.755 19:58:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:00.755 19:58:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:00.755 19:58:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56765' 00:05:00.755 killing process with pid 56765 00:05:00.755 19:58:08 -- common/autotest_common.sh@955 -- # kill 56765 00:05:00.755 19:58:08 -- common/autotest_common.sh@960 -- # wait 56765 00:05:02.132 00:05:02.132 real 0m2.400s 00:05:02.132 user 0m2.343s 00:05:02.132 sys 0m0.434s 00:05:02.132 19:58:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.132 19:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.132 ************************************ 00:05:02.132 END TEST dpdk_mem_utility 00:05:02.132 ************************************ 00:05:02.132 19:58:09 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:02.132 19:58:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.132 19:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.132 19:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.132 ************************************ 00:05:02.132 START TEST event 00:05:02.132 ************************************ 00:05:02.132 19:58:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:02.132 * Looking for test storage... 
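For reference, the dpdk_mem_utility run that ends above exercises a three-step flow: start spdk_tgt, ask it to dump its DPDK memory state over RPC (which writes /tmp/spdk_mem_dump.txt), then post-process that dump with scripts/dpdk_mem_info.py — once for the heap/mempool/memzone summary and once with -m 0 for the per-element detail listed above. A minimal stand-alone sketch, assuming the repo lives at /home/vagrant/spdk_repo/spdk and the default RPC socket /var/tmp/spdk.sock; the sleep-based wait and plain kill are simplifications of the test's waitforlisten/killprocess helpers:

  #!/usr/bin/env bash
  # Sketch of the memory-stats flow traced above; not the test script itself.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_tgt &                    # start the target application
  tgt_pid=$!
  sleep 2                                         # assumption: crude wait for /var/tmp/spdk.sock
  "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  "$SPDK"/scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
  "$SPDK"/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0
  kill "$tgt_pid"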
00:05:02.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:02.132 19:58:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.132 19:58:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.132 19:58:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.391 19:58:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.391 19:58:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.391 19:58:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.391 19:58:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.391 19:58:09 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.391 19:58:09 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.391 19:58:09 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.391 19:58:09 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.391 19:58:09 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.391 19:58:09 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.391 19:58:09 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.391 19:58:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.391 19:58:09 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.391 19:58:09 -- scripts/common.sh@344 -- # : 1 00:05:02.391 19:58:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.391 19:58:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.391 19:58:09 -- scripts/common.sh@364 -- # decimal 1 00:05:02.391 19:58:09 -- scripts/common.sh@352 -- # local d=1 00:05:02.391 19:58:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.391 19:58:09 -- scripts/common.sh@354 -- # echo 1 00:05:02.391 19:58:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.391 19:58:09 -- scripts/common.sh@365 -- # decimal 2 00:05:02.391 19:58:09 -- scripts/common.sh@352 -- # local d=2 00:05:02.391 19:58:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.391 19:58:09 -- scripts/common.sh@354 -- # echo 2 00:05:02.391 19:58:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.391 19:58:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.391 19:58:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.391 19:58:09 -- scripts/common.sh@367 -- # return 0 00:05:02.391 19:58:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.391 19:58:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.391 --rc genhtml_branch_coverage=1 00:05:02.391 --rc genhtml_function_coverage=1 00:05:02.391 --rc genhtml_legend=1 00:05:02.391 --rc geninfo_all_blocks=1 00:05:02.391 --rc geninfo_unexecuted_blocks=1 00:05:02.391 00:05:02.391 ' 00:05:02.391 19:58:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.391 --rc genhtml_branch_coverage=1 00:05:02.391 --rc genhtml_function_coverage=1 00:05:02.391 --rc genhtml_legend=1 00:05:02.391 --rc geninfo_all_blocks=1 00:05:02.391 --rc geninfo_unexecuted_blocks=1 00:05:02.391 00:05:02.391 ' 00:05:02.391 19:58:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.391 --rc genhtml_branch_coverage=1 00:05:02.391 --rc genhtml_function_coverage=1 00:05:02.391 --rc genhtml_legend=1 00:05:02.391 --rc geninfo_all_blocks=1 00:05:02.391 --rc geninfo_unexecuted_blocks=1 00:05:02.391 00:05:02.391 ' 00:05:02.391 19:58:09 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.391 --rc genhtml_branch_coverage=1 00:05:02.391 --rc genhtml_function_coverage=1 00:05:02.391 --rc genhtml_legend=1 00:05:02.391 --rc geninfo_all_blocks=1 00:05:02.391 --rc geninfo_unexecuted_blocks=1 00:05:02.391 00:05:02.391 ' 00:05:02.391 19:58:09 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:02.391 19:58:09 -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.391 19:58:09 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.391 19:58:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:02.391 19:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.391 19:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.391 ************************************ 00:05:02.391 START TEST event_perf 00:05:02.391 ************************************ 00:05:02.391 19:58:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.391 Running I/O for 1 seconds...[2024-12-16 19:58:09.838616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.391 [2024-12-16 19:58:09.838813] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56850 ] 00:05:02.391 [2024-12-16 19:58:09.986858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.652 [2024-12-16 19:58:10.179145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.652 [2024-12-16 19:58:10.179447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.652 [2024-12-16 19:58:10.179508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.652 [2024-12-16 19:58:10.179498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.039 Running I/O for 1 seconds... 00:05:04.039 lcore 0: 200742 00:05:04.039 lcore 1: 200744 00:05:04.039 lcore 2: 200741 00:05:04.039 lcore 3: 200742 00:05:04.039 done. 00:05:04.039 00:05:04.039 ************************************ 00:05:04.039 END TEST event_perf 00:05:04.039 ************************************ 00:05:04.039 real 0m1.580s 00:05:04.039 user 0m4.371s 00:05:04.039 sys 0m0.093s 00:05:04.039 19:58:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.039 19:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.039 19:58:11 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.039 19:58:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:04.039 19:58:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.039 19:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.039 ************************************ 00:05:04.039 START TEST event_reactor 00:05:04.039 ************************************ 00:05:04.039 19:58:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.039 [2024-12-16 19:58:11.459395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:04.039 [2024-12-16 19:58:11.459651] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56895 ] 00:05:04.039 [2024-12-16 19:58:11.605209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.300 [2024-12-16 19:58:11.743518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.685 test_start 00:05:05.685 oneshot 00:05:05.685 tick 100 00:05:05.685 tick 100 00:05:05.685 tick 250 00:05:05.685 tick 100 00:05:05.685 tick 100 00:05:05.685 tick 250 00:05:05.685 tick 500 00:05:05.685 tick 100 00:05:05.685 tick 100 00:05:05.685 tick 100 00:05:05.685 tick 250 00:05:05.685 tick 100 00:05:05.685 tick 100 00:05:05.685 test_end 00:05:05.685 00:05:05.685 real 0m1.515s 00:05:05.685 user 0m1.340s 00:05:05.685 sys 0m0.068s 00:05:05.685 19:58:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.685 19:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:05.685 ************************************ 00:05:05.685 END TEST event_reactor 00:05:05.685 ************************************ 00:05:05.685 19:58:12 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.685 19:58:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:05.685 19:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.685 19:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:05.685 ************************************ 00:05:05.685 START TEST event_reactor_perf 00:05:05.685 ************************************ 00:05:05.685 19:58:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.685 [2024-12-16 19:58:13.004619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:05.685 [2024-12-16 19:58:13.004713] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56937 ] 00:05:05.685 [2024-12-16 19:58:13.150595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.685 [2024-12-16 19:58:13.288742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.075 test_start 00:05:07.075 test_end 00:05:07.075 Performance: 406106 events per second 00:05:07.075 00:05:07.075 real 0m1.516s 00:05:07.075 user 0m1.318s 00:05:07.075 sys 0m0.089s 00:05:07.075 19:58:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.075 19:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.075 ************************************ 00:05:07.075 END TEST event_reactor_perf 00:05:07.075 ************************************ 00:05:07.075 19:58:14 -- event/event.sh@49 -- # uname -s 00:05:07.075 19:58:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:07.075 19:58:14 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:07.075 19:58:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.075 19:58:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.075 19:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.075 ************************************ 00:05:07.075 START TEST event_scheduler 00:05:07.075 ************************************ 00:05:07.075 19:58:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:07.075 * Looking for test storage... 00:05:07.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:07.075 19:58:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:07.075 19:58:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:07.075 19:58:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:07.075 19:58:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:07.075 19:58:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:07.075 19:58:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:07.075 19:58:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:07.075 19:58:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:07.075 19:58:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:07.075 19:58:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.075 19:58:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:07.075 19:58:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:07.075 19:58:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:07.075 19:58:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:07.075 19:58:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:07.075 19:58:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:07.075 19:58:14 -- scripts/common.sh@344 -- # : 1 00:05:07.075 19:58:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:07.075 19:58:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.075 19:58:14 -- scripts/common.sh@364 -- # decimal 1 00:05:07.075 19:58:14 -- scripts/common.sh@352 -- # local d=1 00:05:07.075 19:58:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.075 19:58:14 -- scripts/common.sh@354 -- # echo 1 00:05:07.075 19:58:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:07.075 19:58:14 -- scripts/common.sh@365 -- # decimal 2 00:05:07.075 19:58:14 -- scripts/common.sh@352 -- # local d=2 00:05:07.075 19:58:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.075 19:58:14 -- scripts/common.sh@354 -- # echo 2 00:05:07.075 19:58:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:07.075 19:58:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:07.075 19:58:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:07.075 19:58:14 -- scripts/common.sh@367 -- # return 0 00:05:07.075 19:58:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.075 19:58:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.075 --rc genhtml_branch_coverage=1 00:05:07.075 --rc genhtml_function_coverage=1 00:05:07.075 --rc genhtml_legend=1 00:05:07.075 --rc geninfo_all_blocks=1 00:05:07.075 --rc geninfo_unexecuted_blocks=1 00:05:07.075 00:05:07.075 ' 00:05:07.075 19:58:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.075 --rc genhtml_branch_coverage=1 00:05:07.075 --rc genhtml_function_coverage=1 00:05:07.075 --rc genhtml_legend=1 00:05:07.075 --rc geninfo_all_blocks=1 00:05:07.075 --rc geninfo_unexecuted_blocks=1 00:05:07.076 00:05:07.076 ' 00:05:07.076 19:58:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.076 --rc genhtml_branch_coverage=1 00:05:07.076 --rc genhtml_function_coverage=1 00:05:07.076 --rc genhtml_legend=1 00:05:07.076 --rc geninfo_all_blocks=1 00:05:07.076 --rc geninfo_unexecuted_blocks=1 00:05:07.076 00:05:07.076 ' 00:05:07.076 19:58:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.076 --rc genhtml_branch_coverage=1 00:05:07.076 --rc genhtml_function_coverage=1 00:05:07.076 --rc genhtml_legend=1 00:05:07.076 --rc geninfo_all_blocks=1 00:05:07.076 --rc geninfo_unexecuted_blocks=1 00:05:07.076 00:05:07.076 ' 00:05:07.076 19:58:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:07.076 19:58:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57001 00:05:07.076 19:58:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.076 19:58:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 57001 00:05:07.076 19:58:14 -- common/autotest_common.sh@829 -- # '[' -z 57001 ']' 00:05:07.076 19:58:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.076 19:58:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.076 19:58:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
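The event_scheduler test that starts here drives the standalone scheduler app entirely over RPC, as the trace below shows: the app is launched with --wait-for-rpc so nothing runs until the dynamic scheduler is selected with framework_set_scheduler and the subsystems are released with framework_start_init; thread creation then goes through the test's rpc.py plugin. A minimal sketch of that sequence, assuming the same repo layout, the default /var/tmp/spdk.sock socket, and that the scheduler_plugin module sits next to the test app (the test's own waitforlisten/killprocess handling is omitted):

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  export PYTHONPATH=$SPDK/test/event/scheduler    # assumption: lets rpc.py import scheduler_plugin
  # 4 cores (0xF), main lcore 2, paused until framework_start_init
  "$SPDK"/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  sched_pid=$!
  sleep 2                                         # assumption: crude wait for the RPC socket
  "$SPDK"/scripts/rpc.py framework_set_scheduler dynamic   # falls back if no usable cpufreq governor
  "$SPDK"/scripts/rpc.py framework_start_init
  # one active thread pinned to core 0, as in the trace below
  "$SPDK"/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  kill "$sched_pid"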
00:05:07.076 19:58:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.076 19:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.076 19:58:14 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:07.336 [2024-12-16 19:58:14.727365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.336 [2024-12-16 19:58:14.727478] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57001 ] 00:05:07.336 [2024-12-16 19:58:14.874500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.603 [2024-12-16 19:58:15.052238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.603 [2024-12-16 19:58:15.052410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.603 [2024-12-16 19:58:15.052781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.603 [2024-12-16 19:58:15.052800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.180 19:58:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.180 19:58:15 -- common/autotest_common.sh@862 -- # return 0 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 POWER: Env isn't set yet! 00:05:08.180 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:08.180 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:08.180 POWER: Cannot set governor of lcore 0 to userspace 00:05:08.180 POWER: Attempting to initialise PSTAT power management... 00:05:08.180 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:08.180 POWER: Cannot set governor of lcore 0 to performance 00:05:08.180 POWER: Attempting to initialise AMD PSTATE power management... 00:05:08.180 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:08.180 POWER: Cannot set governor of lcore 0 to userspace 00:05:08.180 POWER: Attempting to initialise CPPC power management... 00:05:08.180 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:08.180 POWER: Cannot set governor of lcore 0 to userspace 00:05:08.180 POWER: Attempting to initialise VM power management... 
00:05:08.180 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:08.180 POWER: Unable to set Power Management Environment for lcore 0 00:05:08.180 [2024-12-16 19:58:15.549808] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:08.180 [2024-12-16 19:58:15.549823] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:08.180 [2024-12-16 19:58:15.549832] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:08.180 [2024-12-16 19:58:15.549846] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:08.180 [2024-12-16 19:58:15.549855] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:08.180 [2024-12-16 19:58:15.549862] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:08.180 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 [2024-12-16 19:58:15.768378] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:08.180 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:08.180 19:58:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.180 19:58:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 ************************************ 00:05:08.180 START TEST scheduler_create_thread 00:05:08.180 ************************************ 00:05:08.180 19:58:15 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 2 00:05:08.180 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 3 00:05:08.180 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 4 00:05:08.180 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.180 5 00:05:08.180 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.180 19:58:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:08.180 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.180 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 6 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 7 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 8 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 9 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 10 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.441 19:58:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 19:58:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.441 00:05:08.441 real 0m0.110s 00:05:08.441 user 0m0.013s 00:05:08.441 sys 0m0.002s 00:05:08.441 19:58:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.441 ************************************ 00:05:08.441 END TEST scheduler_create_thread 
00:05:08.441 ************************************ 00:05:08.441 19:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.441 19:58:15 -- scheduler/scheduler.sh@46 -- # killprocess 57001 00:05:08.441 19:58:15 -- common/autotest_common.sh@936 -- # '[' -z 57001 ']' 00:05:08.441 19:58:15 -- common/autotest_common.sh@940 -- # kill -0 57001 00:05:08.441 19:58:15 -- common/autotest_common.sh@941 -- # uname 00:05:08.441 19:58:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.441 19:58:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57001 00:05:08.441 19:58:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:08.441 19:58:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:08.441 killing process with pid 57001 00:05:08.441 19:58:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57001' 00:05:08.441 19:58:15 -- common/autotest_common.sh@955 -- # kill 57001 00:05:08.441 19:58:15 -- common/autotest_common.sh@960 -- # wait 57001 00:05:09.013 [2024-12-16 19:58:16.372980] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:09.703 00:05:09.703 real 0m2.497s 00:05:09.703 user 0m4.031s 00:05:09.703 sys 0m0.344s 00:05:09.703 ************************************ 00:05:09.703 END TEST event_scheduler 00:05:09.703 ************************************ 00:05:09.703 19:58:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.703 19:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.703 19:58:17 -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.703 19:58:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.703 19:58:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.703 19:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.703 19:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.703 ************************************ 00:05:09.703 START TEST app_repeat 00:05:09.703 ************************************ 00:05:09.703 19:58:17 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:09.703 19:58:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.703 19:58:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.703 19:58:17 -- event/event.sh@13 -- # local nbd_list 00:05:09.703 19:58:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.703 19:58:17 -- event/event.sh@14 -- # local bdev_list 00:05:09.703 19:58:17 -- event/event.sh@15 -- # local repeat_times=4 00:05:09.703 19:58:17 -- event/event.sh@17 -- # modprobe nbd 00:05:09.703 19:58:17 -- event/event.sh@19 -- # repeat_pid=57085 00:05:09.703 19:58:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.703 Process app_repeat pid: 57085 00:05:09.703 19:58:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57085' 00:05:09.703 spdk_app_start Round 0 00:05:09.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
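The app_repeat test that begins here repeats the same bdev/nbd round-trip several times: app_repeat is started on its own RPC socket, two malloc bdevs are created, each is exported as an nbd device, and a single 4 KiB direct read via dd confirms the device is live before everything is torn down and the next round starts. A minimal one-device sketch of that round, assuming the same repo layout and root privileges for the nbd module and device access (the test's waitfornbd polling and cleanup traps are omitted):

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-nbd.sock
  sudo modprobe nbd
  # -r: dedicated RPC socket, -m 0x3: two cores, -t 4: the test's repeat_times value
  "$SPDK"/test/event/app_repeat/app_repeat -r "$SOCK" -m 0x3 -t 4 &
  app_pid=$!
  sleep 2                                                       # assumption: crude wait for the socket
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_create 64 4096  # 64 MiB bdev, 4 KiB blocks -> Malloc0
  "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one-block read check, as in the trace below
  "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0
  kill "$app_pid"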
00:05:09.703 19:58:17 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.703 19:58:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.703 19:58:17 -- event/event.sh@25 -- # waitforlisten 57085 /var/tmp/spdk-nbd.sock 00:05:09.703 19:58:17 -- common/autotest_common.sh@829 -- # '[' -z 57085 ']' 00:05:09.703 19:58:17 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.703 19:58:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.703 19:58:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.703 19:58:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.703 19:58:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.703 19:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.703 [2024-12-16 19:58:17.124623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.703 [2024-12-16 19:58:17.124729] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57085 ] 00:05:09.703 [2024-12-16 19:58:17.270672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.963 [2024-12-16 19:58:17.444174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.963 [2024-12-16 19:58:17.444275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.530 19:58:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.530 19:58:17 -- common/autotest_common.sh@862 -- # return 0 00:05:10.530 19:58:17 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.788 Malloc0 00:05:10.788 19:58:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.788 Malloc1 00:05:11.046 19:58:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@12 -- # local i 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.046 
/dev/nbd0 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.046 19:58:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.046 19:58:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:11.047 19:58:18 -- common/autotest_common.sh@867 -- # local i 00:05:11.047 19:58:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.047 19:58:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.047 19:58:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:11.047 19:58:18 -- common/autotest_common.sh@871 -- # break 00:05:11.047 19:58:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.047 19:58:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.047 19:58:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.047 1+0 records in 00:05:11.047 1+0 records out 00:05:11.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190663 s, 21.5 MB/s 00:05:11.047 19:58:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.047 19:58:18 -- common/autotest_common.sh@884 -- # size=4096 00:05:11.047 19:58:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.047 19:58:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.047 19:58:18 -- common/autotest_common.sh@887 -- # return 0 00:05:11.047 19:58:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.047 19:58:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.047 19:58:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.307 /dev/nbd1 00:05:11.307 19:58:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.307 19:58:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.307 19:58:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:11.307 19:58:18 -- common/autotest_common.sh@867 -- # local i 00:05:11.566 19:58:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.566 19:58:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.566 19:58:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:11.566 19:58:18 -- common/autotest_common.sh@871 -- # break 00:05:11.566 19:58:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.566 19:58:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.566 19:58:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.566 1+0 records in 00:05:11.566 1+0 records out 00:05:11.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196611 s, 20.8 MB/s 00:05:11.566 19:58:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.566 19:58:18 -- common/autotest_common.sh@884 -- # size=4096 00:05:11.566 19:58:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.566 19:58:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.566 19:58:18 -- common/autotest_common.sh@887 -- # return 0 00:05:11.566 19:58:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.566 19:58:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.566 19:58:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.566 19:58:18 -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.566 19:58:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.566 19:58:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.566 { 00:05:11.566 "nbd_device": "/dev/nbd0", 00:05:11.566 "bdev_name": "Malloc0" 00:05:11.566 }, 00:05:11.567 { 00:05:11.567 "nbd_device": "/dev/nbd1", 00:05:11.567 "bdev_name": "Malloc1" 00:05:11.567 } 00:05:11.567 ]' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.567 { 00:05:11.567 "nbd_device": "/dev/nbd0", 00:05:11.567 "bdev_name": "Malloc0" 00:05:11.567 }, 00:05:11.567 { 00:05:11.567 "nbd_device": "/dev/nbd1", 00:05:11.567 "bdev_name": "Malloc1" 00:05:11.567 } 00:05:11.567 ]' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.567 /dev/nbd1' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.567 /dev/nbd1' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.567 256+0 records in 00:05:11.567 256+0 records out 00:05:11.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672198 s, 156 MB/s 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.567 19:58:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.825 256+0 records in 00:05:11.825 256+0 records out 00:05:11.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207021 s, 50.7 MB/s 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.825 256+0 records in 00:05:11.825 256+0 records out 00:05:11.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193704 s, 54.1 MB/s 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@51 -- # local i 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@41 -- # break 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.825 19:58:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@41 -- # break 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.084 19:58:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@65 -- # true 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.342 19:58:19 -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.342 19:58:19 -- 
event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.603 19:58:20 -- event/event.sh@35 -- # sleep 3 00:05:13.537 [2024-12-16 19:58:20.922381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.537 [2024-12-16 19:58:21.050308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.537 [2024-12-16 19:58:21.050312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.537 [2024-12-16 19:58:21.155247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.537 [2024-12-16 19:58:21.155311] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.066 spdk_app_start Round 1 00:05:16.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.067 19:58:23 -- event/event.sh@23 -- # for i in {0..2} 00:05:16.067 19:58:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:16.067 19:58:23 -- event/event.sh@25 -- # waitforlisten 57085 /var/tmp/spdk-nbd.sock 00:05:16.067 19:58:23 -- common/autotest_common.sh@829 -- # '[' -z 57085 ']' 00:05:16.067 19:58:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.067 19:58:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.067 19:58:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.067 19:58:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.067 19:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:16.067 19:58:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.067 19:58:23 -- common/autotest_common.sh@862 -- # return 0 00:05:16.067 19:58:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.067 Malloc0 00:05:16.067 19:58:23 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.327 Malloc1 00:05:16.327 19:58:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.327 19:58:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.586 /dev/nbd0 00:05:16.586 19:58:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.586 19:58:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.586 19:58:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:16.586 19:58:24 -- common/autotest_common.sh@867 -- # local i 00:05:16.586 19:58:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.586 19:58:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.586 19:58:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:16.586 19:58:24 -- common/autotest_common.sh@871 -- # break 00:05:16.586 19:58:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.586 19:58:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.586 19:58:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.586 1+0 records in 00:05:16.586 1+0 records out 00:05:16.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353286 s, 11.6 MB/s 00:05:16.586 19:58:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.586 19:58:24 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.586 19:58:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.586 19:58:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.586 19:58:24 -- common/autotest_common.sh@887 -- # return 0 00:05:16.586 19:58:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.586 19:58:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.587 19:58:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.587 /dev/nbd1 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.845 19:58:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:16.845 19:58:24 -- common/autotest_common.sh@867 -- # local i 00:05:16.845 19:58:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.845 19:58:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.845 19:58:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:16.845 19:58:24 -- common/autotest_common.sh@871 -- # break 00:05:16.845 19:58:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.845 19:58:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.845 19:58:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.845 1+0 records in 00:05:16.845 1+0 records out 00:05:16.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474237 s, 8.6 MB/s 00:05:16.845 19:58:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.845 19:58:24 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.845 19:58:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.845 19:58:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.845 19:58:24 -- common/autotest_common.sh@887 -- # return 0 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
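The Round 1 setup just traced repeats the pattern from Round 0: create a malloc bdev over the app_repeat RPC socket, export it through the kernel NBD driver, and poll until the device exists and answers a direct read. Reduced to its essentials (rpc.py stands for scripts/rpc.py as invoked in the log; the temp-file path and the polling cadence are illustrative, not copied from the helper):

  sock=/var/tmp/spdk-nbd.sock
  name=$(rpc.py -s "$sock" bdev_malloc_create 64 4096)   # 64 MB bdev, 4 KiB blocks -> e.g. Malloc0
  rpc.py -s "$sock" nbd_start_disk "$name" /dev/nbd0
  # waitfornbd: the device is usable once it shows up in /proc/partitions
  # and a single 4 KiB direct-I/O read from it succeeds
  for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /tmp/nbdtest); rm -f /tmp/nbdtest
  [ "$size" != 0 ]

The same is done for Malloc1 on /dev/nbd1, after which nbd_get_disks is queried to confirm both exports are visible.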
00:05:16.845 19:58:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.845 { 00:05:16.845 "nbd_device": "/dev/nbd0", 00:05:16.845 "bdev_name": "Malloc0" 00:05:16.845 }, 00:05:16.845 { 00:05:16.845 "nbd_device": "/dev/nbd1", 00:05:16.845 "bdev_name": "Malloc1" 00:05:16.845 } 00:05:16.845 ]' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.845 { 00:05:16.845 "nbd_device": "/dev/nbd0", 00:05:16.845 "bdev_name": "Malloc0" 00:05:16.845 }, 00:05:16.845 { 00:05:16.845 "nbd_device": "/dev/nbd1", 00:05:16.845 "bdev_name": "Malloc1" 00:05:16.845 } 00:05:16.845 ]' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.845 /dev/nbd1' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.845 /dev/nbd1' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.845 19:58:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.103 256+0 records in 00:05:17.104 256+0 records out 00:05:17.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771773 s, 136 MB/s 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.104 256+0 records in 00:05:17.104 256+0 records out 00:05:17.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187748 s, 55.9 MB/s 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.104 256+0 records in 00:05:17.104 256+0 records out 00:05:17.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315179 s, 33.3 MB/s 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:05:17.104 19:58:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.104 19:58:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@41 -- # break 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@41 -- # break 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.362 19:58:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@65 -- # true 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.619 19:58:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.620 19:58:25 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.620 19:58:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.620 19:58:25 -- 
bdev/nbd_common.sh@109 -- # return 0 00:05:17.620 19:58:25 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.877 19:58:25 -- event/event.sh@35 -- # sleep 3 00:05:18.810 [2024-12-16 19:58:26.331188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.068 [2024-12-16 19:58:26.512691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.068 [2024-12-16 19:58:26.512767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.068 [2024-12-16 19:58:26.664044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.068 [2024-12-16 19:58:26.664245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.983 spdk_app_start Round 2 00:05:20.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.983 19:58:28 -- event/event.sh@23 -- # for i in {0..2} 00:05:20.983 19:58:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:20.983 19:58:28 -- event/event.sh@25 -- # waitforlisten 57085 /var/tmp/spdk-nbd.sock 00:05:20.983 19:58:28 -- common/autotest_common.sh@829 -- # '[' -z 57085 ']' 00:05:20.983 19:58:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.983 19:58:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.983 19:58:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.983 19:58:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.983 19:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.268 19:58:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.268 19:58:28 -- common/autotest_common.sh@862 -- # return 0 00:05:21.268 19:58:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.268 Malloc0 00:05:21.268 19:58:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.526 Malloc1 00:05:21.526 19:58:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.526 19:58:29 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.785 /dev/nbd0 00:05:21.785 19:58:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.785 19:58:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.785 19:58:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:21.785 19:58:29 -- common/autotest_common.sh@867 -- # local i 00:05:21.785 19:58:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.785 19:58:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.785 19:58:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:21.785 19:58:29 -- common/autotest_common.sh@871 -- # break 00:05:21.785 19:58:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.785 19:58:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.785 19:58:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.785 1+0 records in 00:05:21.785 1+0 records out 00:05:21.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242595 s, 16.9 MB/s 00:05:21.785 19:58:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.785 19:58:29 -- common/autotest_common.sh@884 -- # size=4096 00:05:21.785 19:58:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.785 19:58:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.785 19:58:29 -- common/autotest_common.sh@887 -- # return 0 00:05:21.785 19:58:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.785 19:58:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.785 19:58:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.045 /dev/nbd1 00:05:22.045 19:58:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.045 19:58:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.045 19:58:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:22.045 19:58:29 -- common/autotest_common.sh@867 -- # local i 00:05:22.045 19:58:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.045 19:58:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.045 19:58:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:22.045 19:58:29 -- common/autotest_common.sh@871 -- # break 00:05:22.045 19:58:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.045 19:58:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.045 19:58:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.045 1+0 records in 00:05:22.045 1+0 records out 00:05:22.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220218 s, 18.6 MB/s 00:05:22.045 19:58:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.045 19:58:29 -- common/autotest_common.sh@884 -- # size=4096 00:05:22.045 19:58:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.045 19:58:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.045 19:58:29 -- common/autotest_common.sh@887 -- # return 0 00:05:22.045 19:58:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.046 19:58:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.046 19:58:29 -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.046 19:58:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.046 19:58:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.305 { 00:05:22.305 "nbd_device": "/dev/nbd0", 00:05:22.305 "bdev_name": "Malloc0" 00:05:22.305 }, 00:05:22.305 { 00:05:22.305 "nbd_device": "/dev/nbd1", 00:05:22.305 "bdev_name": "Malloc1" 00:05:22.305 } 00:05:22.305 ]' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.305 { 00:05:22.305 "nbd_device": "/dev/nbd0", 00:05:22.305 "bdev_name": "Malloc0" 00:05:22.305 }, 00:05:22.305 { 00:05:22.305 "nbd_device": "/dev/nbd1", 00:05:22.305 "bdev_name": "Malloc1" 00:05:22.305 } 00:05:22.305 ]' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.305 /dev/nbd1' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.305 /dev/nbd1' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.305 256+0 records in 00:05:22.305 256+0 records out 00:05:22.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101423 s, 103 MB/s 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.305 256+0 records in 00:05:22.305 256+0 records out 00:05:22.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165527 s, 63.3 MB/s 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.305 256+0 records in 00:05:22.305 256+0 records out 00:05:22.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153028 s, 68.5 MB/s 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.305 19:58:29 
-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.305 19:58:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@41 -- # break 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.563 19:58:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@41 -- # break 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.563 19:58:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@65 -- # true 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@105 -- # 
'[' 0 -ne 0 ']' 00:05:22.821 19:58:30 -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.821 19:58:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.079 19:58:30 -- event/event.sh@35 -- # sleep 3 00:05:24.024 [2024-12-16 19:58:31.343258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.024 [2024-12-16 19:58:31.486071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.024 [2024-12-16 19:58:31.486098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.024 [2024-12-16 19:58:31.593925] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.024 [2024-12-16 19:58:31.593965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.572 19:58:33 -- event/event.sh@38 -- # waitforlisten 57085 /var/tmp/spdk-nbd.sock 00:05:26.573 19:58:33 -- common/autotest_common.sh@829 -- # '[' -z 57085 ']' 00:05:26.573 19:58:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.573 19:58:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.573 19:58:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.573 19:58:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.573 19:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.573 19:58:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.573 19:58:33 -- common/autotest_common.sh@862 -- # return 0 00:05:26.573 19:58:33 -- event/event.sh@39 -- # killprocess 57085 00:05:26.573 19:58:33 -- common/autotest_common.sh@936 -- # '[' -z 57085 ']' 00:05:26.573 19:58:33 -- common/autotest_common.sh@940 -- # kill -0 57085 00:05:26.573 19:58:33 -- common/autotest_common.sh@941 -- # uname 00:05:26.573 19:58:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.573 19:58:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57085 00:05:26.573 killing process with pid 57085 00:05:26.573 19:58:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.573 19:58:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.573 19:58:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57085' 00:05:26.573 19:58:33 -- common/autotest_common.sh@955 -- # kill 57085 00:05:26.573 19:58:33 -- common/autotest_common.sh@960 -- # wait 57085 00:05:27.145 spdk_app_start is called in Round 0. 00:05:27.145 Shutdown signal received, stop current app iteration 00:05:27.145 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.145 spdk_app_start is called in Round 1. 00:05:27.145 Shutdown signal received, stop current app iteration 00:05:27.145 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.145 spdk_app_start is called in Round 2. 00:05:27.145 Shutdown signal received, stop current app iteration 00:05:27.145 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.145 spdk_app_start is called in Round 3. 
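Rounds 0 through 2 above each finish with the same data-integrity pass over the two NBD exports before the app instance is killed and restarted. Stripped of the xtrace noise (file name shortened; rpc.py again standing for scripts/rpc.py; block counts and sizes match the dd/cmp lines in the log):

  # 1 MiB reference pattern
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  # Write it to both exports with direct I/O, then read back and compare byte-for-byte
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest
  # Tear down: detach both devices, confirm nothing is left exported, stop the app
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  [ "$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)" = "[]" ]
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM

A round then ends with the "sleep 3" visible between rounds before the next spdk_app_start iteration begins.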
00:05:27.145 Shutdown signal received, stop current app iteration 00:05:27.145 19:58:34 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.145 19:58:34 -- event/event.sh@42 -- # return 0 00:05:27.145 00:05:27.145 real 0m17.424s 00:05:27.145 user 0m37.240s 00:05:27.145 sys 0m1.986s 00:05:27.145 19:58:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.145 19:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:27.145 ************************************ 00:05:27.145 END TEST app_repeat 00:05:27.145 ************************************ 00:05:27.145 19:58:34 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.145 19:58:34 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.145 19:58:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.145 19:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.145 19:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:27.145 ************************************ 00:05:27.145 START TEST cpu_locks 00:05:27.145 ************************************ 00:05:27.145 19:58:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.145 * Looking for test storage... 00:05:27.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.145 19:58:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.145 19:58:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.145 19:58:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.145 19:58:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.145 19:58:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.145 19:58:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.145 19:58:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.145 19:58:34 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.145 19:58:34 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.145 19:58:34 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.145 19:58:34 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.145 19:58:34 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.145 19:58:34 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.145 19:58:34 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.145 19:58:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.145 19:58:34 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.145 19:58:34 -- scripts/common.sh@344 -- # : 1 00:05:27.146 19:58:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.146 19:58:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.146 19:58:34 -- scripts/common.sh@364 -- # decimal 1 00:05:27.146 19:58:34 -- scripts/common.sh@352 -- # local d=1 00:05:27.146 19:58:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.146 19:58:34 -- scripts/common.sh@354 -- # echo 1 00:05:27.146 19:58:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.146 19:58:34 -- scripts/common.sh@365 -- # decimal 2 00:05:27.146 19:58:34 -- scripts/common.sh@352 -- # local d=2 00:05:27.146 19:58:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.146 19:58:34 -- scripts/common.sh@354 -- # echo 2 00:05:27.146 19:58:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.146 19:58:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.146 19:58:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.146 19:58:34 -- scripts/common.sh@367 -- # return 0 00:05:27.146 19:58:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.146 19:58:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 19:58:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 19:58:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 19:58:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 19:58:34 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.146 19:58:34 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.146 19:58:34 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.146 19:58:34 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.146 19:58:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.146 19:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.146 19:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:27.146 ************************************ 00:05:27.146 START TEST default_locks 00:05:27.146 ************************************ 00:05:27.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
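The default_locks test that starts here launches spdk_tgt on a single core (-m 0x1) and, as the trace just below shows, asserts that the running target actually holds its per-core CPU lock, that killing the target works, and that waiting on the dead pid afterwards fails cleanly. The lock assertion itself is small; a sketch of the idea, assuming the target pid is in $spdk_tgt_pid (lslocks and the spdk_cpu_lock marker are exactly what the trace greps for):

  locks_exist() {
    # The target takes a file lock per claimed core; lslocks lists it under the owning pid
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist "$spdk_tgt_pid" || exit 1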
00:05:27.146 19:58:34 -- common/autotest_common.sh@1114 -- # default_locks 00:05:27.146 19:58:34 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57509 00:05:27.146 19:58:34 -- event/cpu_locks.sh@47 -- # waitforlisten 57509 00:05:27.146 19:58:34 -- common/autotest_common.sh@829 -- # '[' -z 57509 ']' 00:05:27.146 19:58:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.146 19:58:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.146 19:58:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.146 19:58:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.146 19:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:27.146 19:58:34 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.146 [2024-12-16 19:58:34.783444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.146 [2024-12-16 19:58:34.783539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57509 ] 00:05:27.407 [2024-12-16 19:58:34.933278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.668 [2024-12-16 19:58:35.146780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.668 [2024-12-16 19:58:35.147021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.602 19:58:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.602 19:58:36 -- common/autotest_common.sh@862 -- # return 0 00:05:28.602 19:58:36 -- event/cpu_locks.sh@49 -- # locks_exist 57509 00:05:28.602 19:58:36 -- event/cpu_locks.sh@22 -- # lslocks -p 57509 00:05:28.602 19:58:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.861 19:58:36 -- event/cpu_locks.sh@50 -- # killprocess 57509 00:05:28.861 19:58:36 -- common/autotest_common.sh@936 -- # '[' -z 57509 ']' 00:05:28.861 19:58:36 -- common/autotest_common.sh@940 -- # kill -0 57509 00:05:28.861 19:58:36 -- common/autotest_common.sh@941 -- # uname 00:05:28.861 19:58:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.861 19:58:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57509 00:05:28.861 killing process with pid 57509 00:05:28.861 19:58:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.861 19:58:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.861 19:58:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57509' 00:05:28.861 19:58:36 -- common/autotest_common.sh@955 -- # kill 57509 00:05:28.861 19:58:36 -- common/autotest_common.sh@960 -- # wait 57509 00:05:30.253 19:58:37 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57509 00:05:30.253 19:58:37 -- common/autotest_common.sh@650 -- # local es=0 00:05:30.253 19:58:37 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57509 00:05:30.253 19:58:37 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.253 19:58:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.253 19:58:37 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.253 19:58:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.253 19:58:37 -- common/autotest_common.sh@653 -- # 
waitforlisten 57509 00:05:30.253 19:58:37 -- common/autotest_common.sh@829 -- # '[' -z 57509 ']' 00:05:30.253 19:58:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.253 ERROR: process (pid: 57509) is no longer running 00:05:30.253 19:58:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.253 19:58:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.253 19:58:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.253 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.253 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57509) - No such process 00:05:30.253 19:58:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.253 19:58:37 -- common/autotest_common.sh@862 -- # return 1 00:05:30.253 19:58:37 -- common/autotest_common.sh@653 -- # es=1 00:05:30.253 19:58:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.253 19:58:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.253 19:58:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.253 ************************************ 00:05:30.253 END TEST default_locks 00:05:30.253 ************************************ 00:05:30.253 19:58:37 -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.253 19:58:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.253 19:58:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.253 19:58:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.253 00:05:30.253 real 0m3.095s 00:05:30.253 user 0m3.099s 00:05:30.253 sys 0m0.491s 00:05:30.253 19:58:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.253 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.253 19:58:37 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.253 19:58:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.253 19:58:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.253 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.254 ************************************ 00:05:30.254 START TEST default_locks_via_rpc 00:05:30.254 ************************************ 00:05:30.254 19:58:37 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:30.254 19:58:37 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57575 00:05:30.254 19:58:37 -- event/cpu_locks.sh@63 -- # waitforlisten 57575 00:05:30.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.254 19:58:37 -- common/autotest_common.sh@829 -- # '[' -z 57575 ']' 00:05:30.254 19:58:37 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.254 19:58:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.254 19:58:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.254 19:58:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.254 19:58:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.254 19:58:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.515 [2024-12-16 19:58:37.914199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
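default_locks_via_rpc, which begins here with a fresh target (pid 57575), covers the runtime variant of the same behaviour: instead of relying on start-up flags, the trace below disables the core locks over RPC, confirms none are held, re-enables them, and confirms the lock is back before killing the target. In outline (the lock check reuses the lslocks/grep idea from the previous test; the helper in the trace globs for lock files whose exact path is not visible here):

  rpc.py framework_disable_cpumask_locks
  ! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # nothing held
  rpc.py framework_enable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock       # lock re-acquired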
00:05:30.515 [2024-12-16 19:58:37.914325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57575 ] 00:05:30.515 [2024-12-16 19:58:38.051252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.775 [2024-12-16 19:58:38.187495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.775 [2024-12-16 19:58:38.187654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.160 19:58:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.160 19:58:39 -- common/autotest_common.sh@862 -- # return 0 00:05:32.160 19:58:39 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.160 19:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.160 19:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.160 19:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.160 19:58:39 -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.160 19:58:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.160 19:58:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.160 19:58:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.160 19:58:39 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.160 19:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.160 19:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.160 19:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.160 19:58:39 -- event/cpu_locks.sh@71 -- # locks_exist 57575 00:05:32.160 19:58:39 -- event/cpu_locks.sh@22 -- # lslocks -p 57575 00:05:32.160 19:58:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.160 19:58:39 -- event/cpu_locks.sh@73 -- # killprocess 57575 00:05:32.160 19:58:39 -- common/autotest_common.sh@936 -- # '[' -z 57575 ']' 00:05:32.160 19:58:39 -- common/autotest_common.sh@940 -- # kill -0 57575 00:05:32.160 19:58:39 -- common/autotest_common.sh@941 -- # uname 00:05:32.160 19:58:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.160 19:58:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57575 00:05:32.160 killing process with pid 57575 00:05:32.160 19:58:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.160 19:58:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.160 19:58:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57575' 00:05:32.160 19:58:39 -- common/autotest_common.sh@955 -- # kill 57575 00:05:32.160 19:58:39 -- common/autotest_common.sh@960 -- # wait 57575 00:05:33.540 ************************************ 00:05:33.540 END TEST default_locks_via_rpc 00:05:33.540 ************************************ 00:05:33.540 00:05:33.540 real 0m3.062s 00:05:33.540 user 0m3.182s 00:05:33.540 sys 0m0.387s 00:05:33.540 19:58:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.540 19:58:40 -- common/autotest_common.sh@10 -- # set +x 00:05:33.540 19:58:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.540 19:58:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.540 19:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.540 19:58:40 -- common/autotest_common.sh@10 -- # set +x 00:05:33.540 
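The default_locks_via_rpc run above checks the same per-core lock file as default_locks, but toggles the locks at runtime over the RPC socket instead of relying on launch-time behavior. A minimal by-hand sketch of that flow is below; the spdk_tgt path, core mask, socket path, RPC method names and lock-file check mirror values visible in this log, while the rpc.py invocation itself is assumed from SPDK's standard tooling rather than taken from this run.
  # start a single-core target on core 0, as the test does
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # drop and re-take the CPU core locks over the default RPC socket
  # (rpc.py path and usage are an assumption; only the method names appear in this log)
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # confirm the per-core lock is held again, as the locks_exist helper does via lslocks
  lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock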
************************************ 00:05:33.540 START TEST non_locking_app_on_locked_coremask 00:05:33.540 ************************************ 00:05:33.540 19:58:40 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:33.540 19:58:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57640 00:05:33.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.540 19:58:40 -- event/cpu_locks.sh@81 -- # waitforlisten 57640 /var/tmp/spdk.sock 00:05:33.540 19:58:40 -- common/autotest_common.sh@829 -- # '[' -z 57640 ']' 00:05:33.540 19:58:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.540 19:58:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.540 19:58:40 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.540 19:58:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.540 19:58:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.540 19:58:40 -- common/autotest_common.sh@10 -- # set +x 00:05:33.540 [2024-12-16 19:58:41.009934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.540 [2024-12-16 19:58:41.010044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57640 ] 00:05:33.540 [2024-12-16 19:58:41.158331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.798 [2024-12-16 19:58:41.297405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.798 [2024-12-16 19:58:41.297559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.366 19:58:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.366 19:58:41 -- common/autotest_common.sh@862 -- # return 0 00:05:34.366 19:58:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57651 00:05:34.366 19:58:41 -- event/cpu_locks.sh@85 -- # waitforlisten 57651 /var/tmp/spdk2.sock 00:05:34.366 19:58:41 -- common/autotest_common.sh@829 -- # '[' -z 57651 ']' 00:05:34.366 19:58:41 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:34.366 19:58:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.366 19:58:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.366 19:58:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.366 19:58:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.366 19:58:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.366 [2024-12-16 19:58:41.837096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.366 [2024-12-16 19:58:41.837508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57651 ] 00:05:34.366 [2024-12-16 19:58:41.977224] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.366 [2024-12-16 19:58:41.977261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.628 [2024-12-16 19:58:42.257256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.628 [2024-12-16 19:58:42.257426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.011 19:58:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.011 19:58:43 -- common/autotest_common.sh@862 -- # return 0 00:05:36.011 19:58:43 -- event/cpu_locks.sh@87 -- # locks_exist 57640 00:05:36.011 19:58:43 -- event/cpu_locks.sh@22 -- # lslocks -p 57640 00:05:36.011 19:58:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.011 19:58:43 -- event/cpu_locks.sh@89 -- # killprocess 57640 00:05:36.011 19:58:43 -- common/autotest_common.sh@936 -- # '[' -z 57640 ']' 00:05:36.011 19:58:43 -- common/autotest_common.sh@940 -- # kill -0 57640 00:05:36.011 19:58:43 -- common/autotest_common.sh@941 -- # uname 00:05:36.011 19:58:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.011 19:58:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57640 00:05:36.011 killing process with pid 57640 00:05:36.011 19:58:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.011 19:58:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.011 19:58:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57640' 00:05:36.011 19:58:43 -- common/autotest_common.sh@955 -- # kill 57640 00:05:36.011 19:58:43 -- common/autotest_common.sh@960 -- # wait 57640 00:05:38.548 19:58:46 -- event/cpu_locks.sh@90 -- # killprocess 57651 00:05:38.548 19:58:46 -- common/autotest_common.sh@936 -- # '[' -z 57651 ']' 00:05:38.548 19:58:46 -- common/autotest_common.sh@940 -- # kill -0 57651 00:05:38.548 19:58:46 -- common/autotest_common.sh@941 -- # uname 00:05:38.548 19:58:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.548 19:58:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57651 00:05:38.549 killing process with pid 57651 00:05:38.549 19:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.549 19:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.549 19:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57651' 00:05:38.549 19:58:46 -- common/autotest_common.sh@955 -- # kill 57651 00:05:38.549 19:58:46 -- common/autotest_common.sh@960 -- # wait 57651 00:05:39.925 ************************************ 00:05:39.925 END TEST non_locking_app_on_locked_coremask 00:05:39.925 ************************************ 00:05:39.925 00:05:39.925 real 0m6.272s 00:05:39.925 user 0m6.610s 00:05:39.925 sys 0m0.785s 00:05:39.925 19:58:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.925 19:58:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.925 19:58:47 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:39.925 19:58:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.925 19:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.925 19:58:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.925 ************************************ 00:05:39.925 START TEST locking_app_on_unlocked_coremask 00:05:39.925 ************************************ 00:05:39.925 19:58:47 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:39.925 19:58:47 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57749 00:05:39.925 19:58:47 -- event/cpu_locks.sh@99 -- # waitforlisten 57749 /var/tmp/spdk.sock 00:05:39.925 19:58:47 -- common/autotest_common.sh@829 -- # '[' -z 57749 ']' 00:05:39.925 19:58:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.925 19:58:47 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:39.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.925 19:58:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.925 19:58:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.925 19:58:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.925 19:58:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.925 [2024-12-16 19:58:47.310105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.925 [2024-12-16 19:58:47.310194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57749 ] 00:05:39.925 [2024-12-16 19:58:47.450529] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:39.925 [2024-12-16 19:58:47.450690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.183 [2024-12-16 19:58:47.591385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.183 [2024-12-16 19:58:47.591528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.749 19:58:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.749 19:58:48 -- common/autotest_common.sh@862 -- # return 0 00:05:40.749 19:58:48 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57765 00:05:40.749 19:58:48 -- event/cpu_locks.sh@103 -- # waitforlisten 57765 /var/tmp/spdk2.sock 00:05:40.749 19:58:48 -- common/autotest_common.sh@829 -- # '[' -z 57765 ']' 00:05:40.749 19:58:48 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.749 19:58:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.749 19:58:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.749 19:58:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.749 19:58:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.749 19:58:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.749 [2024-12-16 19:58:48.197825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:40.749 [2024-12-16 19:58:48.198102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57765 ] 00:05:40.749 [2024-12-16 19:58:48.346341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.009 [2024-12-16 19:58:48.632551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.009 [2024-12-16 19:58:48.632705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.391 19:58:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.391 19:58:49 -- common/autotest_common.sh@862 -- # return 0 00:05:42.391 19:58:49 -- event/cpu_locks.sh@105 -- # locks_exist 57765 00:05:42.391 19:58:49 -- event/cpu_locks.sh@22 -- # lslocks -p 57765 00:05:42.391 19:58:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.391 19:58:49 -- event/cpu_locks.sh@107 -- # killprocess 57749 00:05:42.391 19:58:49 -- common/autotest_common.sh@936 -- # '[' -z 57749 ']' 00:05:42.391 19:58:49 -- common/autotest_common.sh@940 -- # kill -0 57749 00:05:42.391 19:58:49 -- common/autotest_common.sh@941 -- # uname 00:05:42.391 19:58:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.391 19:58:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57749 00:05:42.391 killing process with pid 57749 00:05:42.391 19:58:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.391 19:58:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.391 19:58:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57749' 00:05:42.391 19:58:50 -- common/autotest_common.sh@955 -- # kill 57749 00:05:42.391 19:58:50 -- common/autotest_common.sh@960 -- # wait 57749 00:05:44.959 19:58:52 -- event/cpu_locks.sh@108 -- # killprocess 57765 00:05:44.959 19:58:52 -- common/autotest_common.sh@936 -- # '[' -z 57765 ']' 00:05:44.959 19:58:52 -- common/autotest_common.sh@940 -- # kill -0 57765 00:05:44.959 19:58:52 -- common/autotest_common.sh@941 -- # uname 00:05:44.959 19:58:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.959 19:58:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57765 00:05:44.959 killing process with pid 57765 00:05:44.959 19:58:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.959 19:58:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.959 19:58:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57765' 00:05:44.959 19:58:52 -- common/autotest_common.sh@955 -- # kill 57765 00:05:44.959 19:58:52 -- common/autotest_common.sh@960 -- # wait 57765 00:05:46.336 ************************************ 00:05:46.336 END TEST locking_app_on_unlocked_coremask 00:05:46.336 ************************************ 00:05:46.336 00:05:46.336 real 0m6.349s 00:05:46.336 user 0m6.775s 00:05:46.336 sys 0m0.776s 00:05:46.336 19:58:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.336 19:58:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.336 19:58:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.336 19:58:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.336 19:58:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.336 19:58:53 -- common/autotest_common.sh@10 -- # set +x 
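The locking_app_on_unlocked_coremask run that ends above starts a first target with --disable-cpumask-locks and then a second target on the same core but on a separate RPC socket; because the first instance never claims the core-0 lock file, the second one can take it. A rough reconstruction of those two launches, using only flags that appear in this log (the trailing lslocks check is an assumption modeled on the test's locks_exist helper), might look like:
  # instance 1: core 0, started without taking the CPU core lock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # instance 2: same core, separate RPC socket; it acquires /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  # only the second instance should show up holding a spdk_cpu_lock file
  lslocks | grep spdk_cpu_lock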
00:05:46.336 ************************************ 00:05:46.336 START TEST locking_app_on_locked_coremask 00:05:46.336 ************************************ 00:05:46.336 19:58:53 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:46.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.336 19:58:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57864 00:05:46.336 19:58:53 -- event/cpu_locks.sh@116 -- # waitforlisten 57864 /var/tmp/spdk.sock 00:05:46.336 19:58:53 -- common/autotest_common.sh@829 -- # '[' -z 57864 ']' 00:05:46.336 19:58:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.336 19:58:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.336 19:58:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.336 19:58:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.336 19:58:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.336 19:58:53 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.336 [2024-12-16 19:58:53.708863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.336 [2024-12-16 19:58:53.708973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57864 ] 00:05:46.336 [2024-12-16 19:58:53.860149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.595 [2024-12-16 19:58:54.028566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.595 [2024-12-16 19:58:54.028766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.530 19:58:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.530 19:58:55 -- common/autotest_common.sh@862 -- # return 0 00:05:47.530 19:58:55 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.530 19:58:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57882 00:05:47.530 19:58:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57882 /var/tmp/spdk2.sock 00:05:47.530 19:58:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:47.530 19:58:55 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57882 /var/tmp/spdk2.sock 00:05:47.530 19:58:55 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.530 19:58:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.530 19:58:55 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.530 19:58:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.530 19:58:55 -- common/autotest_common.sh@653 -- # waitforlisten 57882 /var/tmp/spdk2.sock 00:05:47.530 19:58:55 -- common/autotest_common.sh@829 -- # '[' -z 57882 ']' 00:05:47.530 19:58:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.530 19:58:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.530 19:58:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:47.530 19:58:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.530 19:58:55 -- common/autotest_common.sh@10 -- # set +x 00:05:47.788 [2024-12-16 19:58:55.175289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.788 [2024-12-16 19:58:55.175411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57882 ] 00:05:47.788 [2024-12-16 19:58:55.330420] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57864 has claimed it. 00:05:47.788 [2024-12-16 19:58:55.330472] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.353 ERROR: process (pid: 57882) is no longer running 00:05:48.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57882) - No such process 00:05:48.353 19:58:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.353 19:58:55 -- common/autotest_common.sh@862 -- # return 1 00:05:48.353 19:58:55 -- common/autotest_common.sh@653 -- # es=1 00:05:48.353 19:58:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.353 19:58:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.353 19:58:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.353 19:58:55 -- event/cpu_locks.sh@122 -- # locks_exist 57864 00:05:48.353 19:58:55 -- event/cpu_locks.sh@22 -- # lslocks -p 57864 00:05:48.353 19:58:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.611 19:58:56 -- event/cpu_locks.sh@124 -- # killprocess 57864 00:05:48.611 19:58:56 -- common/autotest_common.sh@936 -- # '[' -z 57864 ']' 00:05:48.611 19:58:56 -- common/autotest_common.sh@940 -- # kill -0 57864 00:05:48.611 19:58:56 -- common/autotest_common.sh@941 -- # uname 00:05:48.611 19:58:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.611 19:58:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57864 00:05:48.611 killing process with pid 57864 00:05:48.611 19:58:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.611 19:58:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.611 19:58:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57864' 00:05:48.611 19:58:56 -- common/autotest_common.sh@955 -- # kill 57864 00:05:48.611 19:58:56 -- common/autotest_common.sh@960 -- # wait 57864 00:05:49.545 ************************************ 00:05:49.545 END TEST locking_app_on_locked_coremask 00:05:49.545 ************************************ 00:05:49.545 00:05:49.545 real 0m3.540s 00:05:49.545 user 0m3.813s 00:05:49.545 sys 0m0.533s 00:05:49.545 19:58:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.545 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:05:49.803 19:58:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.803 19:58:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.803 19:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.803 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:05:49.803 ************************************ 00:05:49.803 START TEST locking_overlapped_coremask 00:05:49.803 ************************************ 00:05:49.803 19:58:57 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:49.803 19:58:57 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57935 00:05:49.803 19:58:57 -- event/cpu_locks.sh@133 -- # waitforlisten 57935 /var/tmp/spdk.sock 00:05:49.803 19:58:57 -- common/autotest_common.sh@829 -- # '[' -z 57935 ']' 00:05:49.803 19:58:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.803 19:58:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.803 19:58:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.803 19:58:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.803 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:05:49.803 19:58:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.803 [2024-12-16 19:58:57.278908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.803 [2024-12-16 19:58:57.278996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57935 ] 00:05:49.803 [2024-12-16 19:58:57.418526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.062 [2024-12-16 19:58:57.557455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.062 [2024-12-16 19:58:57.557839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.062 [2024-12-16 19:58:57.557962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.062 [2024-12-16 19:58:57.558085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.631 19:58:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.631 19:58:58 -- common/autotest_common.sh@862 -- # return 0 00:05:50.631 19:58:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57953 00:05:50.631 19:58:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57953 /var/tmp/spdk2.sock 00:05:50.631 19:58:58 -- common/autotest_common.sh@650 -- # local es=0 00:05:50.631 19:58:58 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.631 19:58:58 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57953 /var/tmp/spdk2.sock 00:05:50.631 19:58:58 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.631 19:58:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.631 19:58:58 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.631 19:58:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.631 19:58:58 -- common/autotest_common.sh@653 -- # waitforlisten 57953 /var/tmp/spdk2.sock 00:05:50.631 19:58:58 -- common/autotest_common.sh@829 -- # '[' -z 57953 ']' 00:05:50.631 19:58:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.631 19:58:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.631 19:58:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:50.631 19:58:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.631 19:58:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.631 [2024-12-16 19:58:58.167766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.631 [2024-12-16 19:58:58.168048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57953 ] 00:05:50.889 [2024-12-16 19:58:58.322221] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57935 has claimed it. 00:05:50.889 [2024-12-16 19:58:58.322389] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.456 ERROR: process (pid: 57953) is no longer running 00:05:51.456 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57953) - No such process 00:05:51.456 19:58:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.456 19:58:58 -- common/autotest_common.sh@862 -- # return 1 00:05:51.456 19:58:58 -- common/autotest_common.sh@653 -- # es=1 00:05:51.456 19:58:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.456 19:58:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.456 19:58:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.456 19:58:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:51.456 19:58:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.456 19:58:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.456 19:58:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.456 19:58:58 -- event/cpu_locks.sh@141 -- # killprocess 57935 00:05:51.456 19:58:58 -- common/autotest_common.sh@936 -- # '[' -z 57935 ']' 00:05:51.456 19:58:58 -- common/autotest_common.sh@940 -- # kill -0 57935 00:05:51.456 19:58:58 -- common/autotest_common.sh@941 -- # uname 00:05:51.456 19:58:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.456 19:58:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57935 00:05:51.456 killing process with pid 57935 00:05:51.456 19:58:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.456 19:58:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.456 19:58:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57935' 00:05:51.456 19:58:58 -- common/autotest_common.sh@955 -- # kill 57935 00:05:51.456 19:58:58 -- common/autotest_common.sh@960 -- # wait 57935 00:05:52.394 ************************************ 00:05:52.394 END TEST locking_overlapped_coremask 00:05:52.394 ************************************ 00:05:52.394 00:05:52.394 real 0m2.770s 00:05:52.394 user 0m7.350s 00:05:52.394 sys 0m0.378s 00:05:52.394 19:58:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.394 19:58:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.394 19:59:00 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:52.394 19:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.394 19:59:00 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.394 19:59:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.394 ************************************ 00:05:52.394 START TEST locking_overlapped_coremask_via_rpc 00:05:52.394 ************************************ 00:05:52.394 19:59:00 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:52.394 19:59:00 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58006 00:05:52.394 19:59:00 -- event/cpu_locks.sh@149 -- # waitforlisten 58006 /var/tmp/spdk.sock 00:05:52.394 19:59:00 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:52.394 19:59:00 -- common/autotest_common.sh@829 -- # '[' -z 58006 ']' 00:05:52.394 19:59:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.652 19:59:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.652 19:59:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.652 19:59:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.652 19:59:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.652 [2024-12-16 19:59:00.099555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.652 [2024-12-16 19:59:00.099797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58006 ] 00:05:52.652 [2024-12-16 19:59:00.248861] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.652 [2024-12-16 19:59:00.248899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.910 [2024-12-16 19:59:00.418958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.911 [2024-12-16 19:59:00.419382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.911 [2024-12-16 19:59:00.419660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.911 [2024-12-16 19:59:00.419748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.284 19:59:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.284 19:59:01 -- common/autotest_common.sh@862 -- # return 0 00:05:54.284 19:59:01 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:54.284 19:59:01 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58031 00:05:54.284 19:59:01 -- event/cpu_locks.sh@153 -- # waitforlisten 58031 /var/tmp/spdk2.sock 00:05:54.284 19:59:01 -- common/autotest_common.sh@829 -- # '[' -z 58031 ']' 00:05:54.284 19:59:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.284 19:59:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.284 19:59:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:54.284 19:59:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.284 19:59:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.284 [2024-12-16 19:59:01.642332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.284 [2024-12-16 19:59:01.642598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58031 ] 00:05:54.284 [2024-12-16 19:59:01.796027] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.284 [2024-12-16 19:59:01.799308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.545 [2024-12-16 19:59:02.081404] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.545 [2024-12-16 19:59:02.081919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.545 [2024-12-16 19:59:02.082084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.545 [2024-12-16 19:59:02.082109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.487 19:59:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.487 19:59:03 -- common/autotest_common.sh@862 -- # return 0 00:05:55.487 19:59:03 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.487 19:59:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.487 19:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.487 19:59:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.487 19:59:03 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.487 19:59:03 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.487 19:59:03 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.487 19:59:03 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:55.749 19:59:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.749 19:59:03 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:55.749 19:59:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.749 19:59:03 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.749 19:59:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.749 19:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.749 [2024-12-16 19:59:03.135428] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58006 has claimed it. 00:05:55.749 request: 00:05:55.749 { 00:05:55.749 "method": "framework_enable_cpumask_locks", 00:05:55.749 "req_id": 1 00:05:55.749 } 00:05:55.749 Got JSON-RPC error response 00:05:55.749 response: 00:05:55.749 { 00:05:55.749 "code": -32603, 00:05:55.749 "message": "Failed to claim CPU core: 2" 00:05:55.749 } 00:05:55.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.749 19:59:03 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:55.749 19:59:03 -- common/autotest_common.sh@653 -- # es=1 00:05:55.749 19:59:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.749 19:59:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.749 19:59:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.749 19:59:03 -- event/cpu_locks.sh@158 -- # waitforlisten 58006 /var/tmp/spdk.sock 00:05:55.749 19:59:03 -- common/autotest_common.sh@829 -- # '[' -z 58006 ']' 00:05:55.749 19:59:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.749 19:59:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.749 19:59:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.749 19:59:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.749 19:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.749 19:59:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.749 19:59:03 -- common/autotest_common.sh@862 -- # return 0 00:05:55.749 19:59:03 -- event/cpu_locks.sh@159 -- # waitforlisten 58031 /var/tmp/spdk2.sock 00:05:55.749 19:59:03 -- common/autotest_common.sh@829 -- # '[' -z 58031 ']' 00:05:55.749 19:59:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.749 19:59:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.749 19:59:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.749 19:59:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.749 19:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.008 19:59:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.008 19:59:03 -- common/autotest_common.sh@862 -- # return 0 00:05:56.008 19:59:03 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:56.008 19:59:03 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.008 19:59:03 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.008 19:59:03 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.008 00:05:56.008 real 0m3.506s 00:05:56.008 user 0m1.262s 00:05:56.008 sys 0m0.166s 00:05:56.008 19:59:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.008 19:59:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.008 ************************************ 00:05:56.008 END TEST locking_overlapped_coremask_via_rpc 00:05:56.008 ************************************ 00:05:56.008 19:59:03 -- event/cpu_locks.sh@174 -- # cleanup 00:05:56.008 19:59:03 -- event/cpu_locks.sh@15 -- # [[ -z 58006 ]] 00:05:56.008 19:59:03 -- event/cpu_locks.sh@15 -- # killprocess 58006 00:05:56.008 19:59:03 -- common/autotest_common.sh@936 -- # '[' -z 58006 ']' 00:05:56.008 19:59:03 -- common/autotest_common.sh@940 -- # kill -0 58006 00:05:56.008 19:59:03 -- common/autotest_common.sh@941 -- # uname 00:05:56.008 19:59:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.008 19:59:03 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58006 00:05:56.008 killing process with pid 58006 00:05:56.008 19:59:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.008 19:59:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.008 19:59:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58006' 00:05:56.008 19:59:03 -- common/autotest_common.sh@955 -- # kill 58006 00:05:56.008 19:59:03 -- common/autotest_common.sh@960 -- # wait 58006 00:05:57.384 19:59:04 -- event/cpu_locks.sh@16 -- # [[ -z 58031 ]] 00:05:57.384 19:59:04 -- event/cpu_locks.sh@16 -- # killprocess 58031 00:05:57.384 19:59:04 -- common/autotest_common.sh@936 -- # '[' -z 58031 ']' 00:05:57.384 19:59:04 -- common/autotest_common.sh@940 -- # kill -0 58031 00:05:57.384 19:59:04 -- common/autotest_common.sh@941 -- # uname 00:05:57.384 19:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.384 19:59:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58031 00:05:57.384 killing process with pid 58031 00:05:57.384 19:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:57.384 19:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:57.384 19:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58031' 00:05:57.384 19:59:04 -- common/autotest_common.sh@955 -- # kill 58031 00:05:57.384 19:59:04 -- common/autotest_common.sh@960 -- # wait 58031 00:05:58.759 19:59:06 -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.759 19:59:06 -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.759 19:59:06 -- event/cpu_locks.sh@15 -- # [[ -z 58006 ]] 00:05:58.759 19:59:06 -- event/cpu_locks.sh@15 -- # killprocess 58006 00:05:58.759 19:59:06 -- common/autotest_common.sh@936 -- # '[' -z 58006 ']' 00:05:58.759 19:59:06 -- common/autotest_common.sh@940 -- # kill -0 58006 00:05:58.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58006) - No such process 00:05:58.759 19:59:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58006 is not found' 00:05:58.759 Process with pid 58006 is not found 00:05:58.759 19:59:06 -- event/cpu_locks.sh@16 -- # [[ -z 58031 ]] 00:05:58.759 19:59:06 -- event/cpu_locks.sh@16 -- # killprocess 58031 00:05:58.759 19:59:06 -- common/autotest_common.sh@936 -- # '[' -z 58031 ']' 00:05:58.759 Process with pid 58031 is not found 00:05:58.759 19:59:06 -- common/autotest_common.sh@940 -- # kill -0 58031 00:05:58.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58031) - No such process 00:05:58.759 19:59:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58031 is not found' 00:05:58.759 19:59:06 -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.759 00:05:58.759 real 0m31.577s 00:05:58.759 user 0m53.791s 00:05:58.760 sys 0m4.288s 00:05:58.760 19:59:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.760 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 ************************************ 00:05:58.760 END TEST cpu_locks 00:05:58.760 ************************************ 00:05:58.760 ************************************ 00:05:58.760 END TEST event 00:05:58.760 ************************************ 00:05:58.760 00:05:58.760 real 0m56.510s 00:05:58.760 user 1m42.249s 00:05:58.760 sys 0m7.076s 00:05:58.760 19:59:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.760 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 19:59:06 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.760 19:59:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.760 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 ************************************ 00:05:58.760 START TEST thread 00:05:58.760 ************************************ 00:05:58.760 19:59:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.760 * Looking for test storage... 00:05:58.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.760 19:59:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.760 19:59:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.760 19:59:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.760 19:59:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.760 19:59:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.760 19:59:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.760 19:59:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.760 19:59:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.760 19:59:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.760 19:59:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.760 19:59:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.760 19:59:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.760 19:59:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.760 19:59:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.760 19:59:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.760 19:59:06 -- scripts/common.sh@344 -- # : 1 00:05:58.760 19:59:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.760 19:59:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.760 19:59:06 -- scripts/common.sh@364 -- # decimal 1 00:05:58.760 19:59:06 -- scripts/common.sh@352 -- # local d=1 00:05:58.760 19:59:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.760 19:59:06 -- scripts/common.sh@354 -- # echo 1 00:05:58.760 19:59:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.760 19:59:06 -- scripts/common.sh@365 -- # decimal 2 00:05:58.760 19:59:06 -- scripts/common.sh@352 -- # local d=2 00:05:58.760 19:59:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.760 19:59:06 -- scripts/common.sh@354 -- # echo 2 00:05:58.760 19:59:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.760 19:59:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.760 19:59:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.760 19:59:06 -- scripts/common.sh@367 -- # return 0 00:05:58.760 19:59:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.760 --rc genhtml_branch_coverage=1 00:05:58.760 --rc genhtml_function_coverage=1 00:05:58.760 --rc genhtml_legend=1 00:05:58.760 --rc geninfo_all_blocks=1 00:05:58.760 --rc geninfo_unexecuted_blocks=1 00:05:58.760 00:05:58.760 ' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.760 --rc genhtml_branch_coverage=1 00:05:58.760 --rc genhtml_function_coverage=1 00:05:58.760 --rc genhtml_legend=1 00:05:58.760 --rc geninfo_all_blocks=1 00:05:58.760 --rc geninfo_unexecuted_blocks=1 00:05:58.760 00:05:58.760 ' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.760 --rc genhtml_branch_coverage=1 00:05:58.760 --rc genhtml_function_coverage=1 00:05:58.760 --rc genhtml_legend=1 00:05:58.760 --rc geninfo_all_blocks=1 00:05:58.760 --rc geninfo_unexecuted_blocks=1 00:05:58.760 00:05:58.760 ' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.760 --rc genhtml_branch_coverage=1 00:05:58.760 --rc genhtml_function_coverage=1 00:05:58.760 --rc genhtml_legend=1 00:05:58.760 --rc geninfo_all_blocks=1 00:05:58.760 --rc geninfo_unexecuted_blocks=1 00:05:58.760 00:05:58.760 ' 00:05:58.760 19:59:06 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.760 19:59:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:58.760 19:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.760 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 ************************************ 00:05:58.760 START TEST thread_poller_perf 00:05:58.760 ************************************ 00:05:58.760 19:59:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.760 [2024-12-16 19:59:06.394746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:58.760 [2024-12-16 19:59:06.394989] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58187 ] 00:05:59.019 [2024-12-16 19:59:06.552616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.278 [2024-12-16 19:59:06.729024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.278 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:00.662 [2024-12-16T19:59:08.302Z] ====================================== 00:06:00.662 [2024-12-16T19:59:08.302Z] busy:2614704498 (cyc) 00:06:00.662 [2024-12-16T19:59:08.302Z] total_run_count: 291000 00:06:00.662 [2024-12-16T19:59:08.302Z] tsc_hz: 2600000000 (cyc) 00:06:00.662 [2024-12-16T19:59:08.302Z] ====================================== 00:06:00.662 [2024-12-16T19:59:08.302Z] poller_cost: 8985 (cyc), 3455 (nsec) 00:06:00.662 00:06:00.662 real 0m1.637s 00:06:00.662 user 0m1.446s 00:06:00.662 sys 0m0.080s 00:06:00.662 19:59:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.662 ************************************ 00:06:00.662 END TEST thread_poller_perf 00:06:00.662 ************************************ 00:06:00.662 19:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.662 19:59:08 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.662 19:59:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:00.662 19:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.662 19:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.662 ************************************ 00:06:00.662 START TEST thread_poller_perf 00:06:00.662 ************************************ 00:06:00.662 19:59:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.662 [2024-12-16 19:59:08.070614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.662 [2024-12-16 19:59:08.070691] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:06:00.662 [2024-12-16 19:59:08.215550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.923 [2024-12-16 19:59:08.386505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.923 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:02.301 [2024-12-16T19:59:09.941Z] ====================================== 00:06:02.301 [2024-12-16T19:59:09.941Z] busy:2604545866 (cyc) 00:06:02.301 [2024-12-16T19:59:09.941Z] total_run_count: 3975000 00:06:02.301 [2024-12-16T19:59:09.941Z] tsc_hz: 2600000000 (cyc) 00:06:02.301 [2024-12-16T19:59:09.941Z] ====================================== 00:06:02.301 [2024-12-16T19:59:09.941Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:02.301 ************************************ 00:06:02.301 END TEST thread_poller_perf 00:06:02.301 ************************************ 00:06:02.301 00:06:02.301 real 0m1.596s 00:06:02.301 user 0m1.423s 00:06:02.301 sys 0m0.065s 00:06:02.301 19:59:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.301 19:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.301 19:59:09 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.301 00:06:02.301 real 0m3.450s 00:06:02.301 user 0m2.983s 00:06:02.301 sys 0m0.253s 00:06:02.301 ************************************ 00:06:02.301 END TEST thread 00:06:02.301 ************************************ 00:06:02.301 19:59:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.301 19:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.301 19:59:09 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:02.301 19:59:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.301 19:59:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.301 19:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.301 ************************************ 00:06:02.301 START TEST accel 00:06:02.301 ************************************ 00:06:02.301 19:59:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:02.301 * Looking for test storage... 00:06:02.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:02.301 19:59:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.301 19:59:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.301 19:59:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:02.301 19:59:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:02.301 19:59:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:02.301 19:59:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:02.301 19:59:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:02.301 19:59:09 -- scripts/common.sh@335 -- # IFS=.-: 00:06:02.301 19:59:09 -- scripts/common.sh@335 -- # read -ra ver1 00:06:02.301 19:59:09 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.301 19:59:09 -- scripts/common.sh@336 -- # read -ra ver2 00:06:02.301 19:59:09 -- scripts/common.sh@337 -- # local 'op=<' 00:06:02.301 19:59:09 -- scripts/common.sh@339 -- # ver1_l=2 00:06:02.301 19:59:09 -- scripts/common.sh@340 -- # ver2_l=1 00:06:02.301 19:59:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:02.301 19:59:09 -- scripts/common.sh@343 -- # case "$op" in 00:06:02.301 19:59:09 -- scripts/common.sh@344 -- # : 1 00:06:02.301 19:59:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:02.301 19:59:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.301 19:59:09 -- scripts/common.sh@364 -- # decimal 1 00:06:02.301 19:59:09 -- scripts/common.sh@352 -- # local d=1 00:06:02.301 19:59:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.301 19:59:09 -- scripts/common.sh@354 -- # echo 1 00:06:02.301 19:59:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:02.301 19:59:09 -- scripts/common.sh@365 -- # decimal 2 00:06:02.301 19:59:09 -- scripts/common.sh@352 -- # local d=2 00:06:02.301 19:59:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.301 19:59:09 -- scripts/common.sh@354 -- # echo 2 00:06:02.301 19:59:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:02.301 19:59:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:02.301 19:59:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:02.301 19:59:09 -- scripts/common.sh@367 -- # return 0 00:06:02.301 19:59:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.301 19:59:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.301 --rc genhtml_branch_coverage=1 00:06:02.301 --rc genhtml_function_coverage=1 00:06:02.301 --rc genhtml_legend=1 00:06:02.301 --rc geninfo_all_blocks=1 00:06:02.301 --rc geninfo_unexecuted_blocks=1 00:06:02.301 00:06:02.301 ' 00:06:02.301 19:59:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.301 --rc genhtml_branch_coverage=1 00:06:02.301 --rc genhtml_function_coverage=1 00:06:02.301 --rc genhtml_legend=1 00:06:02.301 --rc geninfo_all_blocks=1 00:06:02.301 --rc geninfo_unexecuted_blocks=1 00:06:02.301 00:06:02.301 ' 00:06:02.301 19:59:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.301 --rc genhtml_branch_coverage=1 00:06:02.301 --rc genhtml_function_coverage=1 00:06:02.301 --rc genhtml_legend=1 00:06:02.301 --rc geninfo_all_blocks=1 00:06:02.301 --rc geninfo_unexecuted_blocks=1 00:06:02.301 00:06:02.301 ' 00:06:02.301 19:59:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.301 --rc genhtml_branch_coverage=1 00:06:02.301 --rc genhtml_function_coverage=1 00:06:02.301 --rc genhtml_legend=1 00:06:02.301 --rc geninfo_all_blocks=1 00:06:02.301 --rc geninfo_unexecuted_blocks=1 00:06:02.301 00:06:02.301 ' 00:06:02.301 19:59:09 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:02.301 19:59:09 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:02.301 19:59:09 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.301 19:59:09 -- accel/accel.sh@59 -- # spdk_tgt_pid=58311 00:06:02.301 19:59:09 -- accel/accel.sh@60 -- # waitforlisten 58311 00:06:02.301 19:59:09 -- common/autotest_common.sh@829 -- # '[' -z 58311 ']' 00:06:02.301 19:59:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.301 19:59:09 -- accel/accel.sh@58 -- # build_accel_config 00:06:02.301 19:59:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.301 19:59:09 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:02.301 19:59:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.301 19:59:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.301 19:59:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.301 19:59:09 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.301 19:59:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.301 19:59:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.301 19:59:09 -- accel/accel.sh@42 -- # jq -r . 00:06:02.301 19:59:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.301 19:59:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.301 19:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.301 [2024-12-16 19:59:09.916262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.301 [2024-12-16 19:59:09.916482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58311 ] 00:06:02.561 [2024-12-16 19:59:10.063507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.821 [2024-12-16 19:59:10.233199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.821 [2024-12-16 19:59:10.233574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.203 19:59:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.203 19:59:11 -- common/autotest_common.sh@862 -- # return 0 00:06:04.203 19:59:11 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:04.203 19:59:11 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:04.203 19:59:11 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:04.203 19:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.203 19:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.203 19:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 
19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # IFS== 00:06:04.203 19:59:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:04.203 19:59:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:04.203 19:59:11 -- accel/accel.sh@67 -- # killprocess 58311 00:06:04.203 19:59:11 -- common/autotest_common.sh@936 -- # '[' -z 58311 ']' 00:06:04.203 19:59:11 -- common/autotest_common.sh@940 -- # kill -0 58311 00:06:04.203 19:59:11 -- common/autotest_common.sh@941 -- # uname 00:06:04.203 19:59:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.203 19:59:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58311 00:06:04.203 killing process with pid 58311 00:06:04.203 19:59:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.203 19:59:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.203 19:59:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58311' 00:06:04.203 19:59:11 -- common/autotest_common.sh@955 -- # kill 58311 00:06:04.203 19:59:11 -- common/autotest_common.sh@960 -- # wait 58311 00:06:05.584 19:59:12 -- accel/accel.sh@68 -- # trap - ERR 00:06:05.584 19:59:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:05.584 19:59:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:05.585 19:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.585 19:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.585 19:59:12 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:05.585 19:59:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:05.585 19:59:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.585 19:59:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.585 19:59:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.585 19:59:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.585 19:59:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.585 19:59:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.585 19:59:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.585 19:59:12 -- accel/accel.sh@42 -- # jq -r . 
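The get_expected_opcs loop above asks the running spdk_tgt which module owns each accel opcode and records the answers in expected_opcs. A rough standalone equivalent, using the repo's rpc.py client and the same jq filter that appears in the trace (paths assume the CI layout above; this is a sketch, not the test's exact code):

    # list opcode=module pairs from a running SPDK target, e.g. "copy=software"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
        | while IFS== read -r opc module; do
              echo "opcode ${opc} is handled by ${module}"
          done

In this software-only run every opcode reports "software", which is why each loop iteration above sets expected_opcs["$opc"]=software before the target is killed.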
00:06:05.585 19:59:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.585 19:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.585 19:59:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:05.585 19:59:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.585 19:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.585 19:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.585 ************************************ 00:06:05.585 START TEST accel_missing_filename 00:06:05.585 ************************************ 00:06:05.585 19:59:12 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:05.585 19:59:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.585 19:59:12 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:05.585 19:59:12 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:05.585 19:59:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.585 19:59:12 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:05.585 19:59:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.585 19:59:12 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:05.585 19:59:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:05.585 19:59:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.585 19:59:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.585 19:59:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.585 19:59:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.585 19:59:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.585 19:59:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.585 19:59:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.585 19:59:12 -- accel/accel.sh@42 -- # jq -r . 00:06:05.585 [2024-12-16 19:59:13.001326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.585 [2024-12-16 19:59:13.001427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ] 00:06:05.585 [2024-12-16 19:59:13.151650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.843 [2024-12-16 19:59:13.331251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.843 [2024-12-16 19:59:13.470282] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.451 [2024-12-16 19:59:13.798284] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:06.451 A filename is required. 
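The "A filename is required." error above is the expected outcome: accel_missing_filename launches accel_perf with -w compress but no -l input file, and the whole thing is wrapped in NOT, so the test passes only if accel_perf fails. The exit-status handling that follows in the trace (es=234 and onward) boils down to roughly this, judging from the xtrace lines (a condensed sketch, not the verbatim helper in autotest_common.sh):

    es=234                                  # raw status returned by the failing accel_perf run
    (( es > 128 )) && es=$(( es - 128 ))    # strip the 128 "killed by signal" bias -> 106
    case "$es" in 0) ;; *) es=1 ;; esac     # collapse any remaining failure code to 1
    (( !es == 0 )) && echo "negative test passed"   # NOT inverts it: failure here means success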
00:06:06.451 19:59:14 -- common/autotest_common.sh@653 -- # es=234 00:06:06.451 19:59:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.451 19:59:14 -- common/autotest_common.sh@662 -- # es=106 00:06:06.451 19:59:14 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:06.451 19:59:14 -- common/autotest_common.sh@670 -- # es=1 00:06:06.451 19:59:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.451 00:06:06.451 real 0m1.096s 00:06:06.451 user 0m0.900s 00:06:06.451 sys 0m0.117s 00:06:06.451 ************************************ 00:06:06.451 END TEST accel_missing_filename 00:06:06.451 ************************************ 00:06:06.451 19:59:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.451 19:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.714 19:59:14 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.714 19:59:14 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:06.714 19:59:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.714 19:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.714 ************************************ 00:06:06.714 START TEST accel_compress_verify 00:06:06.714 ************************************ 00:06:06.714 19:59:14 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.714 19:59:14 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.714 19:59:14 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.714 19:59:14 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:06.714 19:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.714 19:59:14 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:06.714 19:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.714 19:59:14 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.714 19:59:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.714 19:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.714 19:59:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.714 19:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.714 19:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.714 19:59:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.714 19:59:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.714 19:59:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.714 19:59:14 -- accel/accel.sh@42 -- # jq -r . 00:06:06.714 [2024-12-16 19:59:14.137430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:06.714 [2024-12-16 19:59:14.137533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58420 ] 00:06:06.714 [2024-12-16 19:59:14.278386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.972 [2024-12-16 19:59:14.457579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.972 [2024-12-16 19:59:14.597717] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.537 [2024-12-16 19:59:14.925171] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:07.537 00:06:07.537 Compression does not support the verify option, aborting. 00:06:07.794 19:59:15 -- common/autotest_common.sh@653 -- # es=161 00:06:07.794 19:59:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.794 19:59:15 -- common/autotest_common.sh@662 -- # es=33 00:06:07.794 19:59:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:07.794 19:59:15 -- common/autotest_common.sh@670 -- # es=1 00:06:07.794 19:59:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.794 00:06:07.794 real 0m1.091s 00:06:07.794 user 0m0.890s 00:06:07.794 sys 0m0.124s 00:06:07.794 ************************************ 00:06:07.794 END TEST accel_compress_verify 00:06:07.794 ************************************ 00:06:07.794 19:59:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.794 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.794 19:59:15 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:07.794 19:59:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:07.794 19:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.794 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.794 ************************************ 00:06:07.794 START TEST accel_wrong_workload 00:06:07.794 ************************************ 00:06:07.794 19:59:15 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:07.794 19:59:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:07.794 19:59:15 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:07.794 19:59:15 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:07.794 19:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.794 19:59:15 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:07.794 19:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.794 19:59:15 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:07.794 19:59:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:07.794 19:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.794 19:59:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.794 19:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.794 19:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.794 19:59:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.794 19:59:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.794 19:59:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.794 19:59:15 -- accel/accel.sh@42 -- # jq -r . 
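Right before each accel_perf launch the trace prints the body of build_accel_config, which assembles the JSON that accel_perf reads through -c /dev/fd/62. A minimal stand-in for that pattern, with module names that are purely illustrative (the real helper lives in test/accel/accel.sh and its entries depend on which hardware accel tests are enabled):

    build_accel_config() {
        local accel_json_cfg=()        # per-module JSON snippets; empty in this software-only run
        # the [[ 0 -gt 0 ]] guards seen in the trace skip hardware modules that are not under test
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        local IFS=,                    # join whatever snippets exist with commas
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    # accel_perf then consumes the result via process substitution, e.g.:
    # accel_perf -c <(build_accel_config) -t 1 -w foobar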
00:06:07.794 Unsupported workload type: foobar 00:06:07.794 [2024-12-16 19:59:15.260151] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:07.794 accel_perf options: 00:06:07.794 [-h help message] 00:06:07.794 [-q queue depth per core] 00:06:07.794 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.794 [-T number of threads per core 00:06:07.794 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.794 [-t time in seconds] 00:06:07.794 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.794 [ dif_verify, , dif_generate, dif_generate_copy 00:06:07.794 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.794 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.794 [-S for crc32c workload, use this seed value (default 0) 00:06:07.794 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.794 [-f for fill workload, use this BYTE value (default 255) 00:06:07.794 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.794 [-y verify result if this switch is on] 00:06:07.794 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.794 Can be used to spread operations across a wider range of memory. 00:06:07.794 19:59:15 -- common/autotest_common.sh@653 -- # es=1 00:06:07.794 19:59:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.794 19:59:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.794 19:59:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.794 00:06:07.794 real 0m0.053s 00:06:07.794 user 0m0.054s 00:06:07.794 sys 0m0.029s 00:06:07.794 19:59:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.794 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.795 ************************************ 00:06:07.795 END TEST accel_wrong_workload 00:06:07.795 ************************************ 00:06:07.795 19:59:15 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.795 19:59:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:07.795 19:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.795 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.795 ************************************ 00:06:07.795 START TEST accel_negative_buffers 00:06:07.795 ************************************ 00:06:07.795 19:59:15 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.795 19:59:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:07.795 19:59:15 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:07.795 19:59:15 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:07.795 19:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.795 19:59:15 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:07.795 19:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.795 19:59:15 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:07.795 19:59:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:07.795 19:59:15 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:07.795 19:59:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.795 19:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.795 19:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.795 19:59:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.795 19:59:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.795 19:59:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.795 19:59:15 -- accel/accel.sh@42 -- # jq -r . 00:06:07.795 -x option must be non-negative. 00:06:07.795 [2024-12-16 19:59:15.349199] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:07.795 accel_perf options: 00:06:07.795 [-h help message] 00:06:07.795 [-q queue depth per core] 00:06:07.795 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.795 [-T number of threads per core 00:06:07.795 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.795 [-t time in seconds] 00:06:07.795 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.795 [ dif_verify, , dif_generate, dif_generate_copy 00:06:07.795 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.795 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.795 [-S for crc32c workload, use this seed value (default 0) 00:06:07.795 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.795 [-f for fill workload, use this BYTE value (default 255) 00:06:07.795 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.795 [-y verify result if this switch is on] 00:06:07.795 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.795 Can be used to spread operations across a wider range of memory. 
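The option listing above covers every switch the suite exercises. The positive tests that follow use exactly these flags; for example, the crc32c run below amounts to the following (a representative invocation using the same binary path as the trace, minus the -c config fd):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y     # 1-second run, crc32c workload, seed 32, verify results

The bandwidth column in its output is just transfers/s times the 4096-byte transfer size, so the 460096/s reported further down works out to about 1797 MiB/s, matching the printed figure.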
00:06:07.795 19:59:15 -- common/autotest_common.sh@653 -- # es=1 00:06:07.795 19:59:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.795 19:59:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.795 19:59:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.795 00:06:07.795 real 0m0.052s 00:06:07.795 user 0m0.047s 00:06:07.795 sys 0m0.031s 00:06:07.795 19:59:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.795 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.795 ************************************ 00:06:07.795 END TEST accel_negative_buffers 00:06:07.795 ************************************ 00:06:07.795 19:59:15 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:07.795 19:59:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:07.795 19:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.795 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.795 ************************************ 00:06:07.795 START TEST accel_crc32c 00:06:07.795 ************************************ 00:06:07.795 19:59:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:07.795 19:59:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.795 19:59:15 -- accel/accel.sh@17 -- # local accel_module 00:06:07.795 19:59:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:07.795 19:59:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:07.795 19:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.795 19:59:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.795 19:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.795 19:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.795 19:59:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.795 19:59:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.795 19:59:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.795 19:59:15 -- accel/accel.sh@42 -- # jq -r . 00:06:08.052 [2024-12-16 19:59:15.441780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.052 [2024-12-16 19:59:15.441997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:06:08.052 [2024-12-16 19:59:15.589741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.309 [2024-12-16 19:59:15.770456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.207 19:59:17 -- accel/accel.sh@18 -- # out=' 00:06:10.207 SPDK Configuration: 00:06:10.207 Core mask: 0x1 00:06:10.207 00:06:10.207 Accel Perf Configuration: 00:06:10.207 Workload Type: crc32c 00:06:10.207 CRC-32C seed: 32 00:06:10.207 Transfer size: 4096 bytes 00:06:10.207 Vector count 1 00:06:10.207 Module: software 00:06:10.207 Queue depth: 32 00:06:10.207 Allocate depth: 32 00:06:10.207 # threads/core: 1 00:06:10.207 Run time: 1 seconds 00:06:10.207 Verify: Yes 00:06:10.207 00:06:10.207 Running for 1 seconds... 
00:06:10.207 00:06:10.207 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.207 ------------------------------------------------------------------------------------ 00:06:10.207 0,0 460096/s 1797 MiB/s 0 0 00:06:10.207 ==================================================================================== 00:06:10.207 Total 460096/s 1797 MiB/s 0 0' 00:06:10.207 19:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:10.207 19:59:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.207 19:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:10.207 19:59:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:10.207 19:59:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.207 19:59:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.207 19:59:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.207 19:59:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.207 19:59:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.207 19:59:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.207 19:59:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.207 19:59:17 -- accel/accel.sh@42 -- # jq -r . 00:06:10.207 [2024-12-16 19:59:17.541470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.207 [2024-12-16 19:59:17.541585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58513 ] 00:06:10.207 [2024-12-16 19:59:17.689053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.465 [2024-12-16 19:59:17.870166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=0x1 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=crc32c 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=32 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=software 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=32 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=32 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=1 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.465 19:59:18 -- accel/accel.sh@21 -- # val=Yes 00:06:10.465 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.465 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.466 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.466 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.466 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.466 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.466 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.466 19:59:18 -- accel/accel.sh@21 -- # val= 00:06:10.466 19:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.466 19:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.466 19:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.838 19:59:19 -- accel/accel.sh@21 -- # val= 00:06:11.838 19:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # IFS=: 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # read -r var val 00:06:11.838 19:59:19 -- accel/accel.sh@21 -- # val= 00:06:11.838 19:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # IFS=: 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # read -r var val 00:06:11.838 19:59:19 -- accel/accel.sh@21 -- # val= 00:06:11.838 19:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # IFS=: 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # read -r var val 00:06:11.838 19:59:19 -- accel/accel.sh@21 -- # val= 00:06:11.838 19:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # IFS=: 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # read -r var val 00:06:11.838 19:59:19 -- accel/accel.sh@21 -- # val= 00:06:11.838 19:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # IFS=: 00:06:11.838 19:59:19 -- 
accel/accel.sh@20 -- # read -r var val 00:06:11.838 19:59:19 -- accel/accel.sh@21 -- # val= 00:06:11.838 19:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # IFS=: 00:06:11.838 19:59:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.096 19:59:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.096 19:59:19 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:12.096 19:59:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.096 00:06:12.096 real 0m4.079s 00:06:12.096 user 0m3.654s 00:06:12.096 sys 0m0.220s 00:06:12.096 19:59:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.096 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.096 ************************************ 00:06:12.096 END TEST accel_crc32c 00:06:12.096 ************************************ 00:06:12.096 19:59:19 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:12.096 19:59:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:12.096 19:59:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.096 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.096 ************************************ 00:06:12.096 START TEST accel_crc32c_C2 00:06:12.096 ************************************ 00:06:12.096 19:59:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:12.096 19:59:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.096 19:59:19 -- accel/accel.sh@17 -- # local accel_module 00:06:12.096 19:59:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.096 19:59:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.096 19:59:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.096 19:59:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.096 19:59:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.096 19:59:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.096 19:59:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.096 19:59:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.096 19:59:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.096 19:59:19 -- accel/accel.sh@42 -- # jq -r . 00:06:12.096 [2024-12-16 19:59:19.567493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.096 [2024-12-16 19:59:19.567606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58554 ] 00:06:12.096 [2024-12-16 19:59:19.723021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.354 [2024-12-16 19:59:19.863384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.252 19:59:21 -- accel/accel.sh@18 -- # out=' 00:06:14.252 SPDK Configuration: 00:06:14.252 Core mask: 0x1 00:06:14.252 00:06:14.252 Accel Perf Configuration: 00:06:14.252 Workload Type: crc32c 00:06:14.252 CRC-32C seed: 0 00:06:14.252 Transfer size: 4096 bytes 00:06:14.252 Vector count 2 00:06:14.252 Module: software 00:06:14.252 Queue depth: 32 00:06:14.252 Allocate depth: 32 00:06:14.252 # threads/core: 1 00:06:14.252 Run time: 1 seconds 00:06:14.252 Verify: Yes 00:06:14.252 00:06:14.252 Running for 1 seconds... 
00:06:14.252 00:06:14.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.252 ------------------------------------------------------------------------------------ 00:06:14.252 0,0 509664/s 3981 MiB/s 0 0 00:06:14.252 ==================================================================================== 00:06:14.252 Total 509664/s 1990 MiB/s 0 0' 00:06:14.252 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.252 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.252 19:59:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:14.252 19:59:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:14.252 19:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.252 19:59:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.252 19:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.252 19:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.252 19:59:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.252 19:59:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.252 19:59:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.252 19:59:21 -- accel/accel.sh@42 -- # jq -r . 00:06:14.252 [2024-12-16 19:59:21.485517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.252 [2024-12-16 19:59:21.485620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58580 ] 00:06:14.252 [2024-12-16 19:59:21.632667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.252 [2024-12-16 19:59:21.777686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=0x1 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=crc32c 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=0 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=software 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=32 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=32 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=1 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val=Yes 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:14.511 19:59:21 -- accel/accel.sh@21 -- # val= 00:06:14.511 19:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:14.511 19:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@21 -- # val= 00:06:15.884 19:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@21 -- # val= 00:06:15.884 19:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@21 -- # val= 00:06:15.884 19:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@21 -- # val= 00:06:15.884 19:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@21 -- # val= 00:06:15.884 19:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:15.884 19:59:23 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@21 -- # val= 00:06:15.884 19:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:15.884 19:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:15.884 19:59:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.884 19:59:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:15.884 19:59:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.884 00:06:15.884 real 0m3.829s 00:06:15.884 user 0m3.395s 00:06:15.884 sys 0m0.229s 00:06:15.884 19:59:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.884 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.884 ************************************ 00:06:15.884 END TEST accel_crc32c_C2 00:06:15.884 ************************************ 00:06:15.884 19:59:23 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:15.884 19:59:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:15.884 19:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.884 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.884 ************************************ 00:06:15.884 START TEST accel_copy 00:06:15.884 ************************************ 00:06:15.884 19:59:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:15.884 19:59:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.884 19:59:23 -- accel/accel.sh@17 -- # local accel_module 00:06:15.884 19:59:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:15.884 19:59:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:15.884 19:59:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.884 19:59:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.884 19:59:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.884 19:59:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.884 19:59:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.884 19:59:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.884 19:59:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.884 19:59:23 -- accel/accel.sh@42 -- # jq -r . 00:06:15.884 [2024-12-16 19:59:23.444875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.884 [2024-12-16 19:59:23.444955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58621 ] 00:06:16.142 [2024-12-16 19:59:23.579477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.142 [2024-12-16 19:59:23.722797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.042 19:59:25 -- accel/accel.sh@18 -- # out=' 00:06:18.042 SPDK Configuration: 00:06:18.042 Core mask: 0x1 00:06:18.042 00:06:18.042 Accel Perf Configuration: 00:06:18.042 Workload Type: copy 00:06:18.042 Transfer size: 4096 bytes 00:06:18.042 Vector count 1 00:06:18.042 Module: software 00:06:18.042 Queue depth: 32 00:06:18.042 Allocate depth: 32 00:06:18.042 # threads/core: 1 00:06:18.042 Run time: 1 seconds 00:06:18.042 Verify: Yes 00:06:18.042 00:06:18.042 Running for 1 seconds... 
00:06:18.042 00:06:18.042 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.042 ------------------------------------------------------------------------------------ 00:06:18.042 0,0 374752/s 1463 MiB/s 0 0 00:06:18.042 ==================================================================================== 00:06:18.042 Total 374752/s 1463 MiB/s 0 0' 00:06:18.042 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.042 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.042 19:59:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:18.042 19:59:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:18.042 19:59:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.042 19:59:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.042 19:59:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.042 19:59:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.042 19:59:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.042 19:59:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.042 19:59:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.042 19:59:25 -- accel/accel.sh@42 -- # jq -r . 00:06:18.042 [2024-12-16 19:59:25.337080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.042 [2024-12-16 19:59:25.337187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:06:18.042 [2024-12-16 19:59:25.485464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.042 [2024-12-16 19:59:25.622693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.300 19:59:25 -- accel/accel.sh@21 -- # val= 00:06:18.300 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.300 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.300 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.300 19:59:25 -- accel/accel.sh@21 -- # val= 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=0x1 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val= 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val= 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=copy 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- 
accel/accel.sh@21 -- # val= 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=software 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=32 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=32 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=1 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val=Yes 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val= 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.301 19:59:25 -- accel/accel.sh@21 -- # val= 00:06:18.301 19:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.301 19:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@21 -- # val= 00:06:19.684 19:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@21 -- # val= 00:06:19.684 19:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@21 -- # val= 00:06:19.684 19:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@21 -- # val= 00:06:19.684 19:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@21 -- # val= 00:06:19.684 19:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@21 -- # val= 00:06:19.684 19:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.684 19:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.684 19:59:27 -- 
accel/accel.sh@20 -- # read -r var val 00:06:19.684 19:59:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.684 19:59:27 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:19.684 19:59:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.684 00:06:19.684 real 0m3.789s 00:06:19.684 user 0m3.380s 00:06:19.684 sys 0m0.208s 00:06:19.684 ************************************ 00:06:19.684 END TEST accel_copy 00:06:19.684 ************************************ 00:06:19.684 19:59:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.684 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.684 19:59:27 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.684 19:59:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:19.684 19:59:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.684 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.684 ************************************ 00:06:19.684 START TEST accel_fill 00:06:19.684 ************************************ 00:06:19.684 19:59:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.684 19:59:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.684 19:59:27 -- accel/accel.sh@17 -- # local accel_module 00:06:19.684 19:59:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.684 19:59:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.684 19:59:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.684 19:59:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.684 19:59:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.684 19:59:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.684 19:59:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.684 19:59:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.684 19:59:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.684 19:59:27 -- accel/accel.sh@42 -- # jq -r . 00:06:19.684 [2024-12-16 19:59:27.283688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.684 [2024-12-16 19:59:27.283794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58683 ] 00:06:19.953 [2024-12-16 19:59:27.430826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.953 [2024-12-16 19:59:27.580376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.853 19:59:29 -- accel/accel.sh@18 -- # out=' 00:06:21.853 SPDK Configuration: 00:06:21.853 Core mask: 0x1 00:06:21.853 00:06:21.853 Accel Perf Configuration: 00:06:21.853 Workload Type: fill 00:06:21.853 Fill pattern: 0x80 00:06:21.853 Transfer size: 4096 bytes 00:06:21.853 Vector count 1 00:06:21.853 Module: software 00:06:21.853 Queue depth: 64 00:06:21.853 Allocate depth: 64 00:06:21.853 # threads/core: 1 00:06:21.853 Run time: 1 seconds 00:06:21.853 Verify: Yes 00:06:21.853 00:06:21.853 Running for 1 seconds... 
00:06:21.853 00:06:21.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.853 ------------------------------------------------------------------------------------ 00:06:21.853 0,0 593152/s 2317 MiB/s 0 0 00:06:21.853 ==================================================================================== 00:06:21.853 Total 593152/s 2317 MiB/s 0 0' 00:06:21.853 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:21.853 19:59:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.853 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:21.853 19:59:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.853 19:59:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.853 19:59:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.853 19:59:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.853 19:59:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.853 19:59:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.853 19:59:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.853 19:59:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.853 19:59:29 -- accel/accel.sh@42 -- # jq -r . 00:06:21.853 [2024-12-16 19:59:29.201724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.853 [2024-12-16 19:59:29.201839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58703 ] 00:06:21.853 [2024-12-16 19:59:29.349936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.853 [2024-12-16 19:59:29.487195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val=0x1 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val=fill 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val=0x80 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.111 19:59:29 -- accel/accel.sh@20 -- # read -r var val 
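The Bandwidth column in these result tables is simply the transfer rate multiplied by the configured transfer size: for the fill run above, 593152 transfers/s at 4096 bytes each works out to about 2317 MiB/s, matching the table. A minimal shell sketch of that arithmetic (both input numbers are copied from the table and configuration dump above; the variable names are illustrative only):

    # Recompute the MiB/s figure from the Transfers column and the transfer size.
    transfers_per_sec=593152   # "0,0  593152/s" row above
    transfer_size=4096         # "Transfer size: 4096 bytes" from the configuration dump
    echo $(( transfers_per_sec * transfer_size / 1024 / 1024 ))   # prints 2317 (MiB/s)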
00:06:22.111 19:59:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.111 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val=software 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val=64 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val=64 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val=1 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val=Yes 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.112 19:59:29 -- accel/accel.sh@21 -- # val= 00:06:22.112 19:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.112 19:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@21 -- # val= 00:06:23.486 19:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@21 -- # val= 00:06:23.486 19:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@21 -- # val= 00:06:23.486 19:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@21 -- # val= 00:06:23.486 19:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@21 -- # val= 00:06:23.486 19:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # IFS=: 
00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@21 -- # val= 00:06:23.486 19:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.486 19:59:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.486 19:59:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.486 19:59:31 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:23.486 19:59:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.486 00:06:23.486 real 0m3.824s 00:06:23.486 user 0m3.397s 00:06:23.486 sys 0m0.224s 00:06:23.486 19:59:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.486 19:59:31 -- common/autotest_common.sh@10 -- # set +x 00:06:23.486 ************************************ 00:06:23.486 END TEST accel_fill 00:06:23.486 ************************************ 00:06:23.486 19:59:31 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:23.486 19:59:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:23.486 19:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.486 19:59:31 -- common/autotest_common.sh@10 -- # set +x 00:06:23.486 ************************************ 00:06:23.486 START TEST accel_copy_crc32c 00:06:23.486 ************************************ 00:06:23.486 19:59:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:23.486 19:59:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.486 19:59:31 -- accel/accel.sh@17 -- # local accel_module 00:06:23.486 19:59:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:23.486 19:59:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:23.486 19:59:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.486 19:59:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.487 19:59:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.487 19:59:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.487 19:59:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.487 19:59:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.487 19:59:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.487 19:59:31 -- accel/accel.sh@42 -- # jq -r . 00:06:23.744 [2024-12-16 19:59:31.144917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.745 [2024-12-16 19:59:31.145021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58744 ] 00:06:23.745 [2024-12-16 19:59:31.291408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.002 [2024-12-16 19:59:31.435686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.903 19:59:33 -- accel/accel.sh@18 -- # out=' 00:06:25.903 SPDK Configuration: 00:06:25.903 Core mask: 0x1 00:06:25.903 00:06:25.903 Accel Perf Configuration: 00:06:25.903 Workload Type: copy_crc32c 00:06:25.903 CRC-32C seed: 0 00:06:25.903 Vector size: 4096 bytes 00:06:25.903 Transfer size: 4096 bytes 00:06:25.903 Vector count 1 00:06:25.903 Module: software 00:06:25.903 Queue depth: 32 00:06:25.903 Allocate depth: 32 00:06:25.903 # threads/core: 1 00:06:25.903 Run time: 1 seconds 00:06:25.903 Verify: Yes 00:06:25.903 00:06:25.903 Running for 1 seconds... 
00:06:25.903 00:06:25.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.903 ------------------------------------------------------------------------------------ 00:06:25.903 0,0 310976/s 1214 MiB/s 0 0 00:06:25.903 ==================================================================================== 00:06:25.903 Total 310976/s 1214 MiB/s 0 0' 00:06:25.903 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.903 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.903 19:59:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:25.903 19:59:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:25.903 19:59:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.903 19:59:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.903 19:59:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.903 19:59:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.903 19:59:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.904 19:59:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.904 19:59:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.904 19:59:33 -- accel/accel.sh@42 -- # jq -r . 00:06:25.904 [2024-12-16 19:59:33.057358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.904 [2024-12-16 19:59:33.057461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58770 ] 00:06:25.904 [2024-12-16 19:59:33.202829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.904 [2024-12-16 19:59:33.354423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=0x1 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=0 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 
19:59:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=software 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=32 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=32 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=1 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val=Yes 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.904 19:59:33 -- accel/accel.sh@21 -- # val= 00:06:25.904 19:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.904 19:59:33 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@21 -- # val= 00:06:27.801 19:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # IFS=: 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@21 -- # val= 00:06:27.801 19:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # IFS=: 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@21 -- # val= 00:06:27.801 19:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # IFS=: 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@21 -- # val= 00:06:27.801 19:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@21 -- # val= 00:06:27.801 19:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # IFS=: 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@21 -- # val= 00:06:27.801 19:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # IFS=: 00:06:27.801 19:59:34 -- accel/accel.sh@20 -- # read -r var val 00:06:27.801 19:59:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.801 19:59:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:27.801 ************************************ 00:06:27.801 END TEST accel_copy_crc32c 00:06:27.801 ************************************ 00:06:27.801 19:59:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.801 00:06:27.801 real 0m3.843s 00:06:27.801 user 0m3.395s 00:06:27.801 sys 0m0.245s 00:06:27.801 19:59:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.801 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.801 19:59:35 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.801 19:59:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.801 19:59:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.801 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:06:27.801 ************************************ 00:06:27.801 START TEST accel_copy_crc32c_C2 00:06:27.801 ************************************ 00:06:27.801 19:59:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.801 19:59:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.801 19:59:35 -- accel/accel.sh@17 -- # local accel_module 00:06:27.801 19:59:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:27.801 19:59:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:27.801 19:59:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.801 19:59:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.801 19:59:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.801 19:59:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.801 19:59:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.801 19:59:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.801 19:59:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.801 19:59:35 -- accel/accel.sh@42 -- # jq -r . 00:06:27.801 [2024-12-16 19:59:35.050871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
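Each case in this section is launched through the run_test helper visible in the trace (run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2): it prints the START TEST banner, runs the wrapped command under bash's time builtin (the real/user/sys lines above), and closes with the END TEST banner. A rough reconstruction of that pattern, inferred from what the trace shows rather than copied from autotest_common.sh, which does more argument and error handling than this:

    # Sketch of the run_test wrapper as it appears in this log (hypothetical simplification).
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"      # e.g. accel_test -t 1 -w copy_crc32c -y -C 2
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }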
00:06:27.801 [2024-12-16 19:59:35.050980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:06:27.801 [2024-12-16 19:59:35.197656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.801 [2024-12-16 19:59:35.377873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.701 19:59:37 -- accel/accel.sh@18 -- # out=' 00:06:29.701 SPDK Configuration: 00:06:29.701 Core mask: 0x1 00:06:29.701 00:06:29.701 Accel Perf Configuration: 00:06:29.701 Workload Type: copy_crc32c 00:06:29.701 CRC-32C seed: 0 00:06:29.701 Vector size: 4096 bytes 00:06:29.701 Transfer size: 8192 bytes 00:06:29.701 Vector count 2 00:06:29.701 Module: software 00:06:29.701 Queue depth: 32 00:06:29.701 Allocate depth: 32 00:06:29.701 # threads/core: 1 00:06:29.701 Run time: 1 seconds 00:06:29.701 Verify: Yes 00:06:29.701 00:06:29.701 Running for 1 seconds... 00:06:29.701 00:06:29.701 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.701 ------------------------------------------------------------------------------------ 00:06:29.701 0,0 177664/s 1388 MiB/s 0 0 00:06:29.701 ==================================================================================== 00:06:29.701 Total 177664/s 694 MiB/s 0 0' 00:06:29.701 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:29.701 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:29.701 19:59:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.701 19:59:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.701 19:59:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.701 19:59:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.701 19:59:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.701 19:59:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.701 19:59:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.701 19:59:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.701 19:59:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.701 19:59:37 -- accel/accel.sh@42 -- # jq -r . 00:06:29.701 [2024-12-16 19:59:37.155312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.701 [2024-12-16 19:59:37.155417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:06:29.701 [2024-12-16 19:59:37.304790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.960 [2024-12-16 19:59:37.487180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.218 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.218 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.218 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 19:59:37 -- accel/accel.sh@21 -- # val=0x1 00:06:30.218 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.218 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.218 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=0 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=software 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=32 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=32 
00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=1 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val=Yes 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:30.219 19:59:37 -- accel/accel.sh@21 -- # val= 00:06:30.219 19:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # IFS=: 00:06:30.219 19:59:37 -- accel/accel.sh@20 -- # read -r var val 00:06:31.593 19:59:39 -- accel/accel.sh@21 -- # val= 00:06:31.593 19:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.593 19:59:39 -- accel/accel.sh@21 -- # val= 00:06:31.593 19:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.593 19:59:39 -- accel/accel.sh@21 -- # val= 00:06:31.593 19:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.593 19:59:39 -- accel/accel.sh@21 -- # val= 00:06:31.593 19:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.593 19:59:39 -- accel/accel.sh@21 -- # val= 00:06:31.593 19:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.593 19:59:39 -- accel/accel.sh@21 -- # val= 00:06:31.593 19:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.593 19:59:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.889 ************************************ 00:06:31.889 END TEST accel_copy_crc32c_C2 00:06:31.889 ************************************ 00:06:31.889 19:59:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.889 19:59:39 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:31.889 19:59:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.889 00:06:31.889 real 0m4.222s 00:06:31.889 user 0m3.752s 00:06:31.889 sys 0m0.259s 00:06:31.889 19:59:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.889 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.889 19:59:39 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:31.889 19:59:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:31.889 19:59:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.889 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.889 ************************************ 00:06:31.889 START TEST accel_dualcast 00:06:31.889 ************************************ 00:06:31.889 19:59:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:31.889 19:59:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.889 19:59:39 -- accel/accel.sh@17 -- # local accel_module 00:06:31.889 19:59:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:31.889 19:59:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:31.889 19:59:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.889 19:59:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.889 19:59:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.889 19:59:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.889 19:59:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.889 19:59:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.889 19:59:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.889 19:59:39 -- accel/accel.sh@42 -- # jq -r . 00:06:31.889 [2024-12-16 19:59:39.320077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.889 [2024-12-16 19:59:39.320180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:06:31.889 [2024-12-16 19:59:39.468710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.173 [2024-12-16 19:59:39.642380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.072 19:59:41 -- accel/accel.sh@18 -- # out=' 00:06:34.072 SPDK Configuration: 00:06:34.072 Core mask: 0x1 00:06:34.072 00:06:34.072 Accel Perf Configuration: 00:06:34.072 Workload Type: dualcast 00:06:34.072 Transfer size: 4096 bytes 00:06:34.072 Vector count 1 00:06:34.072 Module: software 00:06:34.072 Queue depth: 32 00:06:34.072 Allocate depth: 32 00:06:34.072 # threads/core: 1 00:06:34.072 Run time: 1 seconds 00:06:34.072 Verify: Yes 00:06:34.072 00:06:34.072 Running for 1 seconds... 00:06:34.072 00:06:34.072 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.072 ------------------------------------------------------------------------------------ 00:06:34.072 0,0 335264/s 1309 MiB/s 0 0 00:06:34.072 ==================================================================================== 00:06:34.072 Total 335264/s 1309 MiB/s 0 0' 00:06:34.072 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.072 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.073 19:59:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:34.073 19:59:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:34.073 19:59:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.073 19:59:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.073 19:59:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.073 19:59:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.073 19:59:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.073 19:59:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.073 19:59:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.073 19:59:41 -- accel/accel.sh@42 -- # jq -r . 
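The accel_perf flags in these invocations line up with the fields of the "Accel Perf Configuration" dump printed by each run: judging from the pairs seen in this log, -w selects the workload type, -t the run time in seconds, -q the queue depth, -a the allocate depth, -f the fill pattern, -C the vector count, -x the number of xor source buffers, and -y turns on verification (these meanings are inferred from the log itself, not taken from accel_perf's documentation). The dualcast case above is a compact example:

    # Invocation as it appears in the trace ...
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
    # ... and the configuration it then reports:
    #   Workload Type: dualcast                (-w dualcast)
    #   Run time: 1 seconds                    (-t 1)
    #   Verify: Yes                            (-y)
    #   Queue depth: 32 / Allocate depth: 32   (no -q/-a given, so apparently the defaults)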
00:06:34.073 [2024-12-16 19:59:41.433393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.073 [2024-12-16 19:59:41.433498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58904 ] 00:06:34.073 [2024-12-16 19:59:41.577927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.331 [2024-12-16 19:59:41.753762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=0x1 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=dualcast 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=software 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=32 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=32 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=1 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 
19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val=Yes 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.331 19:59:41 -- accel/accel.sh@21 -- # val= 00:06:34.331 19:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.331 19:59:41 -- accel/accel.sh@20 -- # read -r var val 00:06:36.231 19:59:43 -- accel/accel.sh@21 -- # val= 00:06:36.232 19:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.232 19:59:43 -- accel/accel.sh@21 -- # val= 00:06:36.232 19:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.232 19:59:43 -- accel/accel.sh@21 -- # val= 00:06:36.232 19:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.232 19:59:43 -- accel/accel.sh@21 -- # val= 00:06:36.232 19:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.232 19:59:43 -- accel/accel.sh@21 -- # val= 00:06:36.232 19:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.232 19:59:43 -- accel/accel.sh@21 -- # val= 00:06:36.232 19:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.232 19:59:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.232 19:59:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.232 19:59:43 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:36.232 19:59:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.232 00:06:36.232 real 0m4.215s 00:06:36.232 user 0m3.784s 00:06:36.232 sys 0m0.222s 00:06:36.232 19:59:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.232 ************************************ 00:06:36.232 END TEST accel_dualcast 00:06:36.232 ************************************ 00:06:36.232 19:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:36.232 19:59:43 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:36.232 19:59:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:36.232 19:59:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.232 19:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:36.232 ************************************ 00:06:36.232 START TEST accel_compare 00:06:36.232 ************************************ 00:06:36.232 19:59:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:36.232 
19:59:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.232 19:59:43 -- accel/accel.sh@17 -- # local accel_module 00:06:36.232 19:59:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:36.232 19:59:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:36.232 19:59:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.232 19:59:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.232 19:59:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.232 19:59:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.232 19:59:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.232 19:59:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.232 19:59:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.232 19:59:43 -- accel/accel.sh@42 -- # jq -r . 00:06:36.232 [2024-12-16 19:59:43.584672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.232 [2024-12-16 19:59:43.584775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58951 ] 00:06:36.232 [2024-12-16 19:59:43.732398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.490 [2024-12-16 19:59:43.906643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.388 19:59:45 -- accel/accel.sh@18 -- # out=' 00:06:38.388 SPDK Configuration: 00:06:38.388 Core mask: 0x1 00:06:38.388 00:06:38.388 Accel Perf Configuration: 00:06:38.388 Workload Type: compare 00:06:38.388 Transfer size: 4096 bytes 00:06:38.388 Vector count 1 00:06:38.388 Module: software 00:06:38.388 Queue depth: 32 00:06:38.388 Allocate depth: 32 00:06:38.388 # threads/core: 1 00:06:38.388 Run time: 1 seconds 00:06:38.388 Verify: Yes 00:06:38.388 00:06:38.388 Running for 1 seconds... 00:06:38.388 00:06:38.388 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.388 ------------------------------------------------------------------------------------ 00:06:38.388 0,0 426688/s 1666 MiB/s 0 0 00:06:38.388 ==================================================================================== 00:06:38.388 Total 426688/s 1666 MiB/s 0 0' 00:06:38.388 19:59:45 -- accel/accel.sh@20 -- # IFS=: 00:06:38.388 19:59:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.388 19:59:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:38.388 19:59:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:38.388 19:59:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.388 19:59:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.388 19:59:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.388 19:59:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.388 19:59:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.388 19:59:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.388 19:59:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.388 19:59:45 -- accel/accel.sh@42 -- # jq -r . 00:06:38.388 [2024-12-16 19:59:45.670392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:38.388 [2024-12-16 19:59:45.670492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:06:38.388 [2024-12-16 19:59:45.819931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.388 [2024-12-16 19:59:45.988217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=0x1 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=compare 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=software 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=32 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=32 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=1 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val=Yes 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.647 19:59:46 -- accel/accel.sh@21 -- # val= 00:06:38.647 19:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.647 19:59:46 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@21 -- # val= 00:06:40.022 19:59:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@21 -- # val= 00:06:40.022 19:59:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@21 -- # val= 00:06:40.022 19:59:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@21 -- # val= 00:06:40.022 19:59:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@21 -- # val= 00:06:40.022 19:59:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@21 -- # val= 00:06:40.022 19:59:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.022 19:59:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.022 19:59:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.022 19:59:47 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:40.022 ************************************ 00:06:40.022 END TEST accel_compare 00:06:40.022 ************************************ 00:06:40.022 19:59:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.022 00:06:40.022 real 0m4.044s 00:06:40.022 user 0m3.617s 00:06:40.022 sys 0m0.219s 00:06:40.022 19:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.022 19:59:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.022 19:59:47 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:40.022 19:59:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:40.022 19:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.022 19:59:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.022 ************************************ 00:06:40.022 START TEST accel_xor 00:06:40.022 ************************************ 00:06:40.022 19:59:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:40.022 19:59:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.022 19:59:47 -- accel/accel.sh@17 -- # local accel_module 00:06:40.022 
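The long runs of val=..., case "$var" in, IFS=: and read -r var val entries that dominate this section are accel.sh reading accel_perf's configuration output line by line: each "Name: value" line is split on the colon, and the Workload Type and Module fields appear to be recorded for the [[ -n software ]] / [[ software == software ]] checks at the end of every test. A stripped-down sketch of that loop, reconstructed from the trace rather than copied from accel.sh:

    # Hypothetical condensation of the parsing loop behind the repeated trace entries above.
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        val=${val# }                              # drop the space after the colon
        case "$var" in
            'Workload Type') accel_opc=$val ;;    # e.g. compare, fill, xor
            'Module')        accel_module=$val ;; # e.g. software
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y)
    # The real runs also pass -c /dev/fd/62 (the accel_json_cfg built above).
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]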
19:59:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:40.022 19:59:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:40.022 19:59:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.022 19:59:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.022 19:59:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.022 19:59:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.022 19:59:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.022 19:59:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.022 19:59:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.022 19:59:47 -- accel/accel.sh@42 -- # jq -r . 00:06:40.280 [2024-12-16 19:59:47.663505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.280 [2024-12-16 19:59:47.663598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59018 ] 00:06:40.280 [2024-12-16 19:59:47.804035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.538 [2024-12-16 19:59:47.974573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.436 19:59:49 -- accel/accel.sh@18 -- # out=' 00:06:42.436 SPDK Configuration: 00:06:42.436 Core mask: 0x1 00:06:42.436 00:06:42.436 Accel Perf Configuration: 00:06:42.436 Workload Type: xor 00:06:42.436 Source buffers: 2 00:06:42.436 Transfer size: 4096 bytes 00:06:42.436 Vector count 1 00:06:42.436 Module: software 00:06:42.436 Queue depth: 32 00:06:42.436 Allocate depth: 32 00:06:42.436 # threads/core: 1 00:06:42.436 Run time: 1 seconds 00:06:42.436 Verify: Yes 00:06:42.437 00:06:42.437 Running for 1 seconds... 00:06:42.437 00:06:42.437 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.437 ------------------------------------------------------------------------------------ 00:06:42.437 0,0 342560/s 1338 MiB/s 0 0 00:06:42.437 ==================================================================================== 00:06:42.437 Total 342560/s 1338 MiB/s 0 0' 00:06:42.437 19:59:49 -- accel/accel.sh@20 -- # IFS=: 00:06:42.437 19:59:49 -- accel/accel.sh@20 -- # read -r var val 00:06:42.437 19:59:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:42.437 19:59:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:42.437 19:59:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.437 19:59:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.437 19:59:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.437 19:59:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.437 19:59:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.437 19:59:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.437 19:59:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.437 19:59:49 -- accel/accel.sh@42 -- # jq -r . 00:06:42.437 [2024-12-16 19:59:49.733577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:42.437 [2024-12-16 19:59:49.733800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59046 ] 00:06:42.437 [2024-12-16 19:59:49.883325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.437 [2024-12-16 19:59:50.050897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=0x1 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=xor 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=2 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=software 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=32 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=32 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=1 00:06:42.695 19:59:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val=Yes 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.695 19:59:50 -- accel/accel.sh@21 -- # val= 00:06:42.695 19:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.695 19:59:50 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@21 -- # val= 00:06:44.069 19:59:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@21 -- # val= 00:06:44.069 19:59:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@21 -- # val= 00:06:44.069 19:59:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@21 -- # val= 00:06:44.069 19:59:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@21 -- # val= 00:06:44.069 19:59:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@21 -- # val= 00:06:44.069 19:59:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.069 19:59:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.069 19:59:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.069 ************************************ 00:06:44.069 END TEST accel_xor 00:06:44.069 ************************************ 00:06:44.069 19:59:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:44.069 19:59:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.069 00:06:44.069 real 0m4.023s 00:06:44.069 user 0m3.583s 00:06:44.069 sys 0m0.234s 00:06:44.070 19:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.070 19:59:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.070 19:59:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:44.070 19:59:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:44.070 19:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.070 19:59:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.070 ************************************ 00:06:44.070 START TEST accel_xor 00:06:44.070 ************************************ 00:06:44.070 
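The banner above opens the second accel_xor case, which repeats the software XOR benchmark with three source buffers (-x 3) instead of two. Judging from the accel_perf invocation recorded in the trace, an equivalent standalone run would look roughly like the sketch below; this is an illustration only, and it assumes the JSON accel configuration the harness feeds in over /dev/fd/62 can be omitted so that the default software module is exercised.

# Illustrative sketch, not the exact harness command (which additionally passes -c /dev/fd/62):
# -t 1    run for 1 second
# -w xor  XOR workload
# -y      verify the output
# -x 3    use 3 source buffers (4096-byte transfers, as in the configuration dump below)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3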
19:59:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:44.070 19:59:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.070 19:59:51 -- accel/accel.sh@17 -- # local accel_module 00:06:44.070 19:59:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:44.070 19:59:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:44.070 19:59:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.070 19:59:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.070 19:59:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.070 19:59:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.070 19:59:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.070 19:59:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.070 19:59:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.070 19:59:51 -- accel/accel.sh@42 -- # jq -r . 00:06:44.328 [2024-12-16 19:59:51.714784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.328 [2024-12-16 19:59:51.714884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ] 00:06:44.328 [2024-12-16 19:59:51.862101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.586 [2024-12-16 19:59:51.998324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.959 19:59:53 -- accel/accel.sh@18 -- # out=' 00:06:45.959 SPDK Configuration: 00:06:45.959 Core mask: 0x1 00:06:45.959 00:06:45.959 Accel Perf Configuration: 00:06:45.959 Workload Type: xor 00:06:45.959 Source buffers: 3 00:06:45.959 Transfer size: 4096 bytes 00:06:45.959 Vector count 1 00:06:45.959 Module: software 00:06:45.959 Queue depth: 32 00:06:45.959 Allocate depth: 32 00:06:45.959 # threads/core: 1 00:06:45.959 Run time: 1 seconds 00:06:45.959 Verify: Yes 00:06:45.959 00:06:45.959 Running for 1 seconds... 00:06:45.959 00:06:45.959 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.959 ------------------------------------------------------------------------------------ 00:06:45.959 0,0 424768/s 1659 MiB/s 0 0 00:06:45.959 ==================================================================================== 00:06:45.959 Total 424768/s 1659 MiB/s 0 0' 00:06:45.959 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:45.959 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:45.959 19:59:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:45.959 19:59:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:45.959 19:59:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.959 19:59:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.959 19:59:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.959 19:59:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.959 19:59:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.959 19:59:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.959 19:59:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.959 19:59:53 -- accel/accel.sh@42 -- # jq -r . 00:06:45.959 [2024-12-16 19:59:53.596267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.959 [2024-12-16 19:59:53.596384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:06:46.217 [2024-12-16 19:59:53.742384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.475 [2024-12-16 19:59:53.878672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=0x1 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=xor 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=3 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=software 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=32 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=32 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=1 00:06:46.475 19:59:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val=Yes 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:53 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.475 19:59:54 -- accel/accel.sh@21 -- # val= 00:06:46.475 19:59:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.475 19:59:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.475 19:59:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@21 -- # val= 00:06:47.852 19:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@21 -- # val= 00:06:47.852 19:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@21 -- # val= 00:06:47.852 19:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@21 -- # val= 00:06:47.852 19:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@21 -- # val= 00:06:47.852 19:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@21 -- # val= 00:06:47.852 19:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.852 19:59:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.852 19:59:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.852 19:59:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:47.852 19:59:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.852 00:06:47.852 real 0m3.773s 00:06:47.852 user 0m3.362s 00:06:47.852 sys 0m0.207s 00:06:47.852 19:59:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.852 19:59:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.852 ************************************ 00:06:47.852 END TEST accel_xor 00:06:47.852 ************************************ 00:06:47.852 19:59:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:47.852 19:59:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:47.852 19:59:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.852 19:59:55 -- common/autotest_common.sh@10 -- # set +x 00:06:48.110 ************************************ 00:06:48.110 START TEST accel_dif_verify 00:06:48.110 ************************************ 
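The accel_dif_verify case that starts here measures the DIF (Data Integrity Field) verification path: 4096-byte buffers with a 512-byte block size and 8 bytes of metadata per block, as the configuration dump below shows, with accel_perf-level result verification off (Verify: No). A hedged sketch of a comparable manual run, again assuming the harness-supplied -c /dev/fd/62 configuration can be dropped in favor of the default software module:

# Illustrative sketch mirroring 'accel_perf -c /dev/fd/62 -t 1 -w dif_verify' from the trace:
# -t 1           run for 1 second
# -w dif_verify  DIF verify workload (512-byte blocks with 8-byte metadata in this test's config)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify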
00:06:48.110 19:59:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:48.110 19:59:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.110 19:59:55 -- accel/accel.sh@17 -- # local accel_module 00:06:48.110 19:59:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:48.110 19:59:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:48.110 19:59:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.110 19:59:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.110 19:59:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.110 19:59:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.110 19:59:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.110 19:59:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.110 19:59:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.110 19:59:55 -- accel/accel.sh@42 -- # jq -r . 00:06:48.110 [2024-12-16 19:59:55.528560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.110 [2024-12-16 19:59:55.528664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59144 ] 00:06:48.110 [2024-12-16 19:59:55.674137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.368 [2024-12-16 19:59:55.843694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.269 19:59:57 -- accel/accel.sh@18 -- # out=' 00:06:50.269 SPDK Configuration: 00:06:50.269 Core mask: 0x1 00:06:50.269 00:06:50.269 Accel Perf Configuration: 00:06:50.269 Workload Type: dif_verify 00:06:50.269 Vector size: 4096 bytes 00:06:50.269 Transfer size: 4096 bytes 00:06:50.269 Block size: 512 bytes 00:06:50.269 Metadata size: 8 bytes 00:06:50.269 Vector count 1 00:06:50.269 Module: software 00:06:50.269 Queue depth: 32 00:06:50.269 Allocate depth: 32 00:06:50.269 # threads/core: 1 00:06:50.269 Run time: 1 seconds 00:06:50.269 Verify: No 00:06:50.269 00:06:50.269 Running for 1 seconds... 00:06:50.269 00:06:50.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.269 ------------------------------------------------------------------------------------ 00:06:50.269 0,0 99040/s 392 MiB/s 0 0 00:06:50.269 ==================================================================================== 00:06:50.269 Total 99040/s 386 MiB/s 0 0' 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:50.269 19:59:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.269 19:59:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.269 19:59:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.269 19:59:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.269 19:59:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.269 19:59:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.269 19:59:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.269 19:59:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.269 19:59:57 -- accel/accel.sh@42 -- # jq -r . 00:06:50.269 [2024-12-16 19:59:57.487889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.269 [2024-12-16 19:59:57.488103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59172 ] 00:06:50.269 [2024-12-16 19:59:57.634441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.269 [2024-12-16 19:59:57.782432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val=0x1 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val=dif_verify 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val=software 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 
-- # val=32 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val=32 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val=1 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val=No 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.269 19:59:57 -- accel/accel.sh@21 -- # val= 00:06:50.269 19:59:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.269 19:59:57 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 19:59:59 -- accel/accel.sh@21 -- # val= 00:06:52.175 19:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # IFS=: 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 19:59:59 -- accel/accel.sh@21 -- # val= 00:06:52.175 19:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # IFS=: 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 19:59:59 -- accel/accel.sh@21 -- # val= 00:06:52.175 19:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # IFS=: 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 19:59:59 -- accel/accel.sh@21 -- # val= 00:06:52.175 19:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # IFS=: 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 19:59:59 -- accel/accel.sh@21 -- # val= 00:06:52.175 19:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # IFS=: 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 19:59:59 -- accel/accel.sh@21 -- # val= 00:06:52.175 19:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # IFS=: 00:06:52.175 19:59:59 -- accel/accel.sh@20 -- # read -r var val 00:06:52.175 ************************************ 00:06:52.175 END TEST accel_dif_verify 00:06:52.176 ************************************ 00:06:52.176 19:59:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.176 19:59:59 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:52.176 19:59:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.176 00:06:52.176 real 0m3.880s 00:06:52.176 user 0m3.431s 00:06:52.176 sys 0m0.245s 00:06:52.176 19:59:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.176 
19:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:52.176 19:59:59 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:52.176 19:59:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:52.176 19:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.176 19:59:59 -- common/autotest_common.sh@10 -- # set +x 00:06:52.176 ************************************ 00:06:52.176 START TEST accel_dif_generate 00:06:52.176 ************************************ 00:06:52.176 19:59:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:52.176 19:59:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.176 19:59:59 -- accel/accel.sh@17 -- # local accel_module 00:06:52.176 19:59:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:52.176 19:59:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:52.176 19:59:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.176 19:59:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.176 19:59:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.176 19:59:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.176 19:59:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.176 19:59:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.176 19:59:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.176 19:59:59 -- accel/accel.sh@42 -- # jq -r . 00:06:52.176 [2024-12-16 19:59:59.450810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.176 [2024-12-16 19:59:59.451058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59213 ] 00:06:52.176 [2024-12-16 19:59:59.597139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.176 [2024-12-16 19:59:59.749877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.074 20:00:01 -- accel/accel.sh@18 -- # out=' 00:06:54.074 SPDK Configuration: 00:06:54.074 Core mask: 0x1 00:06:54.074 00:06:54.074 Accel Perf Configuration: 00:06:54.074 Workload Type: dif_generate 00:06:54.074 Vector size: 4096 bytes 00:06:54.074 Transfer size: 4096 bytes 00:06:54.074 Block size: 512 bytes 00:06:54.074 Metadata size: 8 bytes 00:06:54.074 Vector count 1 00:06:54.074 Module: software 00:06:54.074 Queue depth: 32 00:06:54.074 Allocate depth: 32 00:06:54.074 # threads/core: 1 00:06:54.074 Run time: 1 seconds 00:06:54.074 Verify: No 00:06:54.074 00:06:54.074 Running for 1 seconds... 
00:06:54.074 00:06:54.074 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.074 ------------------------------------------------------------------------------------ 00:06:54.074 0,0 145888/s 578 MiB/s 0 0 00:06:54.074 ==================================================================================== 00:06:54.074 Total 145888/s 569 MiB/s 0 0' 00:06:54.074 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 20:00:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:54.074 20:00:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:54.074 20:00:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.074 20:00:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.074 20:00:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.074 20:00:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.074 20:00:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.074 20:00:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.074 20:00:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.074 20:00:01 -- accel/accel.sh@42 -- # jq -r . 00:06:54.074 [2024-12-16 20:00:01.376565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.074 [2024-12-16 20:00:01.376673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59239 ] 00:06:54.074 [2024-12-16 20:00:01.525888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.074 [2024-12-16 20:00:01.676182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=0x1 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=dif_generate 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 
00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=software 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=32 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=32 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=1 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val=No 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.332 20:00:01 -- accel/accel.sh@21 -- # val= 00:06:54.332 20:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.332 20:00:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 20:00:03 -- accel/accel.sh@21 -- # val= 00:06:55.706 20:00:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 20:00:03 -- accel/accel.sh@21 -- # val= 00:06:55.706 20:00:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 20:00:03 -- accel/accel.sh@21 -- # val= 00:06:55.706 20:00:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.706 20:00:03 -- 
accel/accel.sh@20 -- # IFS=: 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 20:00:03 -- accel/accel.sh@21 -- # val= 00:06:55.706 20:00:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 20:00:03 -- accel/accel.sh@21 -- # val= 00:06:55.706 20:00:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 20:00:03 -- accel/accel.sh@21 -- # val= 00:06:55.706 20:00:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.706 20:00:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.706 ************************************ 00:06:55.706 END TEST accel_dif_generate 00:06:55.706 ************************************ 00:06:55.706 20:00:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.706 20:00:03 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:55.706 20:00:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.706 00:06:55.706 real 0m3.857s 00:06:55.706 user 0m3.431s 00:06:55.706 sys 0m0.219s 00:06:55.706 20:00:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.706 20:00:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.706 20:00:03 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:55.706 20:00:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:55.706 20:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.706 20:00:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.706 ************************************ 00:06:55.706 START TEST accel_dif_generate_copy 00:06:55.706 ************************************ 00:06:55.706 20:00:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:55.706 20:00:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.706 20:00:03 -- accel/accel.sh@17 -- # local accel_module 00:06:55.706 20:00:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:55.706 20:00:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:55.706 20:00:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.706 20:00:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.706 20:00:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.706 20:00:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.706 20:00:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.706 20:00:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.706 20:00:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.706 20:00:03 -- accel/accel.sh@42 -- # jq -r . 00:06:55.706 [2024-12-16 20:00:03.345859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:55.706 [2024-12-16 20:00:03.345963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:06:55.965 [2024-12-16 20:00:03.493515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.223 [2024-12-16 20:00:03.632063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.661 20:00:05 -- accel/accel.sh@18 -- # out=' 00:06:57.661 SPDK Configuration: 00:06:57.661 Core mask: 0x1 00:06:57.661 00:06:57.661 Accel Perf Configuration: 00:06:57.661 Workload Type: dif_generate_copy 00:06:57.661 Vector size: 4096 bytes 00:06:57.661 Transfer size: 4096 bytes 00:06:57.661 Vector count 1 00:06:57.661 Module: software 00:06:57.661 Queue depth: 32 00:06:57.661 Allocate depth: 32 00:06:57.661 # threads/core: 1 00:06:57.661 Run time: 1 seconds 00:06:57.661 Verify: No 00:06:57.661 00:06:57.661 Running for 1 seconds... 00:06:57.661 00:06:57.661 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.661 ------------------------------------------------------------------------------------ 00:06:57.661 0,0 118720/s 470 MiB/s 0 0 00:06:57.661 ==================================================================================== 00:06:57.661 Total 118720/s 463 MiB/s 0 0' 00:06:57.661 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 20:00:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.661 20:00:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.661 20:00:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.661 20:00:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.661 20:00:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.661 20:00:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.661 20:00:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.661 20:00:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.661 20:00:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.661 20:00:05 -- accel/accel.sh@42 -- # jq -r . 00:06:57.661 [2024-12-16 20:00:05.249965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:57.661 [2024-12-16 20:00:05.250069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:06:57.934 [2024-12-16 20:00:05.389000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.934 [2024-12-16 20:00:05.535385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val=0x1 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val=software 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val=32 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val=32 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 
-- # val=1 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val=No 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.194 20:00:05 -- accel/accel.sh@21 -- # val= 00:06:58.194 20:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # IFS=: 00:06:58.194 20:00:05 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@21 -- # val= 00:06:59.577 20:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # IFS=: 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@21 -- # val= 00:06:59.577 20:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # IFS=: 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@21 -- # val= 00:06:59.577 20:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # IFS=: 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@21 -- # val= 00:06:59.577 20:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # IFS=: 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@21 -- # val= 00:06:59.577 20:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # IFS=: 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@21 -- # val= 00:06:59.577 20:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # IFS=: 00:06:59.577 20:00:07 -- accel/accel.sh@20 -- # read -r var val 00:06:59.577 20:00:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.577 20:00:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:59.577 20:00:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.577 00:06:59.577 real 0m3.804s 00:06:59.577 user 0m3.374s 00:06:59.577 sys 0m0.226s 00:06:59.577 20:00:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.578 ************************************ 00:06:59.578 END TEST accel_dif_generate_copy 00:06:59.578 ************************************ 00:06:59.578 20:00:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.578 20:00:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:59.578 20:00:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.578 20:00:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:59.578 20:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.578 20:00:07 -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.578 ************************************ 00:06:59.578 START TEST accel_comp 00:06:59.578 ************************************ 00:06:59.578 20:00:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.578 20:00:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.578 20:00:07 -- accel/accel.sh@17 -- # local accel_module 00:06:59.578 20:00:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.578 20:00:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.578 20:00:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.578 20:00:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.578 20:00:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.578 20:00:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.578 20:00:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.578 20:00:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.578 20:00:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.578 20:00:07 -- accel/accel.sh@42 -- # jq -r . 00:06:59.578 [2024-12-16 20:00:07.184555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.578 [2024-12-16 20:00:07.184650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59347 ] 00:06:59.836 [2024-12-16 20:00:07.328693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.836 [2024-12-16 20:00:07.467466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.736 20:00:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:01.736 00:07:01.736 SPDK Configuration: 00:07:01.736 Core mask: 0x1 00:07:01.736 00:07:01.736 Accel Perf Configuration: 00:07:01.736 Workload Type: compress 00:07:01.736 Transfer size: 4096 bytes 00:07:01.736 Vector count 1 00:07:01.736 Module: software 00:07:01.736 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.736 Queue depth: 32 00:07:01.736 Allocate depth: 32 00:07:01.736 # threads/core: 1 00:07:01.736 Run time: 1 seconds 00:07:01.736 Verify: No 00:07:01.736 00:07:01.736 Running for 1 seconds... 
00:07:01.736 00:07:01.736 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.736 ------------------------------------------------------------------------------------ 00:07:01.736 0,0 63968/s 266 MiB/s 0 0 00:07:01.736 ==================================================================================== 00:07:01.736 Total 63968/s 249 MiB/s 0 0' 00:07:01.736 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.736 20:00:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.736 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.736 20:00:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.736 20:00:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.736 20:00:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.736 20:00:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.736 20:00:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.736 20:00:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.736 20:00:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.736 20:00:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.736 20:00:09 -- accel/accel.sh@42 -- # jq -r . 00:07:01.736 [2024-12-16 20:00:09.087293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.736 [2024-12-16 20:00:09.087383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:07:01.736 [2024-12-16 20:00:09.227694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.736 [2024-12-16 20:00:09.373755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=0x1 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=compress 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 
00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=software 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=32 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=32 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=1 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.994 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.994 20:00:09 -- accel/accel.sh@21 -- # val=No 00:07:01.994 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.995 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.995 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.995 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.995 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.995 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.995 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.995 20:00:09 -- accel/accel.sh@21 -- # val= 00:07:01.995 20:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.995 20:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.995 20:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@21 -- # val= 00:07:03.369 20:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@21 -- # val= 00:07:03.369 20:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@21 -- # val= 00:07:03.369 20:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@21 -- # val= 
00:07:03.369 20:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@21 -- # val= 00:07:03.369 20:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@21 -- # val= 00:07:03.369 20:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.369 20:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.369 20:00:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.369 20:00:10 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:03.369 20:00:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.369 00:07:03.369 real 0m3.813s 00:07:03.369 user 0m3.384s 00:07:03.369 sys 0m0.225s 00:07:03.369 20:00:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.369 ************************************ 00:07:03.369 20:00:10 -- common/autotest_common.sh@10 -- # set +x 00:07:03.369 END TEST accel_comp 00:07:03.369 ************************************ 00:07:03.628 20:00:11 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.628 20:00:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:03.628 20:00:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.628 20:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.628 ************************************ 00:07:03.628 START TEST accel_decomp 00:07:03.628 ************************************ 00:07:03.628 20:00:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.628 20:00:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.628 20:00:11 -- accel/accel.sh@17 -- # local accel_module 00:07:03.628 20:00:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.628 20:00:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.628 20:00:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.628 20:00:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.628 20:00:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.628 20:00:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.628 20:00:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.628 20:00:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.628 20:00:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.628 20:00:11 -- accel/accel.sh@42 -- # jq -r . 00:07:03.628 [2024-12-16 20:00:11.067114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.628 [2024-12-16 20:00:11.067224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59409 ] 00:07:03.628 [2024-12-16 20:00:11.213800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.885 [2024-12-16 20:00:11.399987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.807 20:00:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:05.807 00:07:05.807 SPDK Configuration: 00:07:05.807 Core mask: 0x1 00:07:05.807 00:07:05.807 Accel Perf Configuration: 00:07:05.807 Workload Type: decompress 00:07:05.807 Transfer size: 4096 bytes 00:07:05.807 Vector count 1 00:07:05.807 Module: software 00:07:05.807 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.807 Queue depth: 32 00:07:05.807 Allocate depth: 32 00:07:05.807 # threads/core: 1 00:07:05.807 Run time: 1 seconds 00:07:05.807 Verify: Yes 00:07:05.807 00:07:05.807 Running for 1 seconds... 00:07:05.807 00:07:05.807 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.807 ------------------------------------------------------------------------------------ 00:07:05.807 0,0 62240/s 114 MiB/s 0 0 00:07:05.807 ==================================================================================== 00:07:05.807 Total 62240/s 243 MiB/s 0 0' 00:07:05.807 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:05.807 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:05.807 20:00:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.807 20:00:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.807 20:00:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.807 20:00:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.807 20:00:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.807 20:00:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.807 20:00:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.807 20:00:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.807 20:00:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.807 20:00:13 -- accel/accel.sh@42 -- # jq -r . 00:07:05.807 [2024-12-16 20:00:13.181734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
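The Total row in the table above is consistent with transfers/s multiplied by the configured 4096-byte transfer size. A quick check of that arithmetic (a sketch, not part of the captured accel_perf output):

    awk 'BEGIN { printf "%.0f MiB/s\n", 62240 * 4096 / (1024 * 1024) }'    # prints 243 MiB/s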
00:07:05.807 [2024-12-16 20:00:13.181837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:07:05.807 [2024-12-16 20:00:13.331482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.065 [2024-12-16 20:00:13.506611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=0x1 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=decompress 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=software 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=32 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- 
accel/accel.sh@21 -- # val=32 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=1 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val=Yes 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:06.065 20:00:13 -- accel/accel.sh@21 -- # val= 00:07:06.065 20:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:06.065 20:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@21 -- # val= 00:07:07.979 20:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@21 -- # val= 00:07:07.979 20:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@21 -- # val= 00:07:07.979 20:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@21 -- # val= 00:07:07.979 20:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@21 -- # val= 00:07:07.979 20:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@21 -- # val= 00:07:07.979 20:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.979 20:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.979 20:00:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.979 20:00:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:07.979 20:00:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.979 00:07:07.979 real 0m4.206s 00:07:07.979 user 0m1.899s 00:07:07.979 sys 0m0.124s 00:07:07.979 20:00:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.979 20:00:15 -- common/autotest_common.sh@10 -- # set +x 00:07:07.979 ************************************ 00:07:07.979 END TEST accel_decomp 00:07:07.979 ************************************ 00:07:07.979 20:00:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:07.979 20:00:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:07.979 20:00:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.979 20:00:15 -- common/autotest_common.sh@10 -- # set +x 00:07:07.979 ************************************ 00:07:07.979 START TEST accel_decmop_full 00:07:07.979 ************************************ 00:07:07.979 20:00:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.979 20:00:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.979 20:00:15 -- accel/accel.sh@17 -- # local accel_module 00:07:07.979 20:00:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.979 20:00:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.979 20:00:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.979 20:00:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.979 20:00:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.979 20:00:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.979 20:00:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.979 20:00:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.979 20:00:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.979 20:00:15 -- accel/accel.sh@42 -- # jq -r . 00:07:07.979 [2024-12-16 20:00:15.302880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.979 [2024-12-16 20:00:15.303013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59476 ] 00:07:07.979 [2024-12-16 20:00:15.461215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.240 [2024-12-16 20:00:15.663438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.148 20:00:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.148 00:07:10.148 SPDK Configuration: 00:07:10.148 Core mask: 0x1 00:07:10.148 00:07:10.148 Accel Perf Configuration: 00:07:10.148 Workload Type: decompress 00:07:10.148 Transfer size: 111250 bytes 00:07:10.148 Vector count 1 00:07:10.148 Module: software 00:07:10.148 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.148 Queue depth: 32 00:07:10.148 Allocate depth: 32 00:07:10.148 # threads/core: 1 00:07:10.148 Run time: 1 seconds 00:07:10.148 Verify: Yes 00:07:10.148 00:07:10.148 Running for 1 seconds... 
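Comparing this invocation with the plain accel_decomp run above, the only new flag is -o 0, and the configuration block now reports a 111250-byte transfer size instead of 4096, so -o evidently selects the transfer size, with 0 meaning the full compressed chunk in this test. A minimal standalone re-run mirroring the captured command line (the JSON accel config the harness feeds on /dev/fd/62 is omitted here, and the paths are the ones from this log, assumed to exist on the test VM):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0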
00:07:10.148 00:07:10.148 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.148 ------------------------------------------------------------------------------------ 00:07:10.148 0,0 4288/s 177 MiB/s 0 0 00:07:10.148 ==================================================================================== 00:07:10.148 Total 4288/s 454 MiB/s 0 0' 00:07:10.148 20:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.148 20:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.148 20:00:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.148 20:00:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.148 20:00:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.148 20:00:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.148 20:00:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.148 20:00:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.148 20:00:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.148 20:00:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.148 20:00:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.148 20:00:17 -- accel/accel.sh@42 -- # jq -r . 00:07:10.148 [2024-12-16 20:00:17.532938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.148 [2024-12-16 20:00:17.533262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:07:10.148 [2024-12-16 20:00:17.695568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.407 [2024-12-16 20:00:17.882556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=0x1 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=decompress 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.407 20:00:18 -- accel/accel.sh@20 
-- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=software 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=32 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=32 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=1 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val=Yes 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 20:00:18 -- accel/accel.sh@21 -- # val= 00:07:10.407 20:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 20:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@21 -- # val= 00:07:12.307 20:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@21 -- # val= 00:07:12.307 20:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@21 -- # val= 00:07:12.307 20:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@21 -- # 
val= 00:07:12.307 20:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@21 -- # val= 00:07:12.307 20:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@21 -- # val= 00:07:12.307 20:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.307 20:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.307 20:00:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.307 20:00:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.307 ************************************ 00:07:12.307 END TEST accel_decmop_full 00:07:12.307 ************************************ 00:07:12.307 20:00:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.307 00:07:12.307 real 0m4.384s 00:07:12.307 user 0m3.902s 00:07:12.307 sys 0m0.262s 00:07:12.307 20:00:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.307 20:00:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.307 20:00:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.307 20:00:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:12.307 20:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.307 20:00:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.307 ************************************ 00:07:12.307 START TEST accel_decomp_mcore 00:07:12.307 ************************************ 00:07:12.307 20:00:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.307 20:00:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.307 20:00:19 -- accel/accel.sh@17 -- # local accel_module 00:07:12.307 20:00:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.307 20:00:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.307 20:00:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.307 20:00:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.307 20:00:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.307 20:00:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.307 20:00:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.307 20:00:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.307 20:00:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.307 20:00:19 -- accel/accel.sh@42 -- # jq -r . 00:07:12.307 [2024-12-16 20:00:19.745462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
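This mcore case adds -m 0xf. In the lines that follow it surfaces as "Core mask: 0xf" in the accel_perf configuration and as "-c 0xf" in the DPDK EAL parameters, which is why four reactors come up on cores 0-3. The mask is simply bits 0-3 set; a quick illustration in shell (assumed, not taken from the log):

    printf '0x%x\n' "$((2#1111))"    # prints 0xf, i.e. cores 0, 1, 2 and 3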
00:07:12.307 [2024-12-16 20:00:19.745565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:07:12.307 [2024-12-16 20:00:19.892898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.565 [2024-12-16 20:00:20.071961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.565 [2024-12-16 20:00:20.072370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.565 [2024-12-16 20:00:20.072937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.565 [2024-12-16 20:00:20.073038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.463 20:00:21 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:14.464 00:07:14.464 SPDK Configuration: 00:07:14.464 Core mask: 0xf 00:07:14.464 00:07:14.464 Accel Perf Configuration: 00:07:14.464 Workload Type: decompress 00:07:14.464 Transfer size: 4096 bytes 00:07:14.464 Vector count 1 00:07:14.464 Module: software 00:07:14.464 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.464 Queue depth: 32 00:07:14.464 Allocate depth: 32 00:07:14.464 # threads/core: 1 00:07:14.464 Run time: 1 seconds 00:07:14.464 Verify: Yes 00:07:14.464 00:07:14.464 Running for 1 seconds... 00:07:14.464 00:07:14.464 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.464 ------------------------------------------------------------------------------------ 00:07:14.464 0,0 58048/s 106 MiB/s 0 0 00:07:14.464 3,0 57856/s 106 MiB/s 0 0 00:07:14.464 2,0 58112/s 107 MiB/s 0 0 00:07:14.464 1,0 57760/s 106 MiB/s 0 0 00:07:14.464 ==================================================================================== 00:07:14.464 Total 231776/s 905 MiB/s 0 0' 00:07:14.464 20:00:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.464 20:00:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.464 20:00:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.464 20:00:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.464 20:00:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.464 20:00:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.464 20:00:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.464 20:00:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.464 20:00:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.464 20:00:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.464 20:00:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.464 20:00:21 -- accel/accel.sh@42 -- # jq -r . 00:07:14.464 [2024-12-16 20:00:21.875097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
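The per-core rows above add up to the Total row exactly. A quick consistency check on those figures (a sketch, not part of the captured output):

    awk 'BEGIN { print 58048 + 57856 + 58112 + 57760 }'    # prints 231776, matching the "Total 231776/s" row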
00:07:14.464 [2024-12-16 20:00:21.875318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:07:14.464 [2024-12-16 20:00:22.025034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.722 [2024-12-16 20:00:22.201363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.722 [2024-12-16 20:00:22.201496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.722 [2024-12-16 20:00:22.201822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.722 [2024-12-16 20:00:22.201840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=0xf 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=decompress 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=software 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=32 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=32 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=1 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val=Yes 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.722 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.722 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.722 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.723 20:00:22 -- accel/accel.sh@21 -- # val= 00:07:14.723 20:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.723 20:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.723 20:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- 
accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@21 -- # val= 00:07:16.623 20:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.623 20:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.623 20:00:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.623 20:00:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.623 20:00:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.623 00:07:16.623 real 0m4.120s 00:07:16.623 user 0m12.331s 00:07:16.623 sys 0m0.283s 00:07:16.623 20:00:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.623 20:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:16.623 ************************************ 00:07:16.623 END TEST accel_decomp_mcore 00:07:16.623 ************************************ 00:07:16.624 20:00:23 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.624 20:00:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:16.624 20:00:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.624 20:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:16.624 ************************************ 00:07:16.624 START TEST accel_decomp_full_mcore 00:07:16.624 ************************************ 00:07:16.624 20:00:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.624 20:00:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.624 20:00:23 -- accel/accel.sh@17 -- # local accel_module 00:07:16.624 20:00:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.624 20:00:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.624 20:00:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.624 20:00:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.624 20:00:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.624 20:00:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.624 20:00:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.624 20:00:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.624 20:00:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.624 20:00:23 -- accel/accel.sh@42 -- # jq -r . 00:07:16.624 [2024-12-16 20:00:23.900687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.624 [2024-12-16 20:00:23.900765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59624 ] 00:07:16.624 [2024-12-16 20:00:24.036216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.624 [2024-12-16 20:00:24.175756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.624 [2024-12-16 20:00:24.175874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.624 [2024-12-16 20:00:24.176108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.624 [2024-12-16 20:00:24.176130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.529 20:00:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:18.529 00:07:18.529 SPDK Configuration: 00:07:18.529 Core mask: 0xf 00:07:18.529 00:07:18.529 Accel Perf Configuration: 00:07:18.529 Workload Type: decompress 00:07:18.529 Transfer size: 111250 bytes 00:07:18.529 Vector count 1 00:07:18.529 Module: software 00:07:18.529 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.529 Queue depth: 32 00:07:18.529 Allocate depth: 32 00:07:18.529 # threads/core: 1 00:07:18.529 Run time: 1 seconds 00:07:18.529 Verify: Yes 00:07:18.529 00:07:18.529 Running for 1 seconds... 00:07:18.529 00:07:18.529 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.529 ------------------------------------------------------------------------------------ 00:07:18.529 0,0 5632/s 232 MiB/s 0 0 00:07:18.529 3,0 4352/s 179 MiB/s 0 0 00:07:18.529 2,0 4320/s 178 MiB/s 0 0 00:07:18.529 1,0 4320/s 178 MiB/s 0 0 00:07:18.529 ==================================================================================== 00:07:18.529 Total 18624/s 1975 MiB/s 0 0' 00:07:18.529 20:00:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.529 20:00:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.529 20:00:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.529 20:00:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.529 20:00:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.529 20:00:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.529 20:00:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.529 20:00:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.529 20:00:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.529 20:00:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.529 20:00:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.529 20:00:25 -- accel/accel.sh@42 -- # jq -r . 00:07:18.529 [2024-12-16 20:00:25.833777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
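Here the four cores again account for the Total row, this time at the full 111250-byte transfer size. Reproducing the aggregate bandwidth from the per-core rates (a sketch; the small difference from the printed 1975 MiB/s is presumably rounding inside accel_perf's own reporting):

    awk 'BEGIN { printf "%.0f MiB/s\n", (5632 + 4352 + 4320 + 4320) * 111250 / (1024 * 1024) }'    # prints 1976 MiB/s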
00:07:18.529 [2024-12-16 20:00:25.833990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:07:18.529 [2024-12-16 20:00:25.979499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.529 [2024-12-16 20:00:26.119415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.529 [2024-12-16 20:00:26.120139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.530 [2024-12-16 20:00:26.120321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.530 [2024-12-16 20:00:26.120364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=0xf 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=decompress 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=software 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 
00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=32 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=32 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=1 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val=Yes 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:18.791 20:00:26 -- accel/accel.sh@21 -- # val= 00:07:18.791 20:00:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # IFS=: 00:07:18.791 20:00:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- 
accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@21 -- # val= 00:07:20.179 20:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.179 20:00:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.179 20:00:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.179 20:00:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.179 20:00:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.179 ************************************ 00:07:20.179 END TEST accel_decomp_full_mcore 00:07:20.179 ************************************ 00:07:20.179 00:07:20.179 real 0m3.873s 00:07:20.179 user 0m11.796s 00:07:20.179 sys 0m0.274s 00:07:20.179 20:00:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.179 20:00:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.179 20:00:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.179 20:00:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:20.179 20:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.179 20:00:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.179 ************************************ 00:07:20.179 START TEST accel_decomp_mthread 00:07:20.179 ************************************ 00:07:20.179 20:00:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.179 20:00:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.179 20:00:27 -- accel/accel.sh@17 -- # local accel_module 00:07:20.179 20:00:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.179 20:00:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.179 20:00:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.179 20:00:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.179 20:00:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.179 20:00:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.179 20:00:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.179 20:00:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.179 20:00:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.179 20:00:27 -- accel/accel.sh@42 -- # jq -r . 00:07:20.179 [2024-12-16 20:00:27.816424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.179 [2024-12-16 20:00:27.816830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59692 ] 00:07:20.439 [2024-12-16 20:00:27.954045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.699 [2024-12-16 20:00:28.091779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.086 20:00:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:22.086 00:07:22.086 SPDK Configuration: 00:07:22.086 Core mask: 0x1 00:07:22.086 00:07:22.086 Accel Perf Configuration: 00:07:22.086 Workload Type: decompress 00:07:22.086 Transfer size: 4096 bytes 00:07:22.086 Vector count 1 00:07:22.086 Module: software 00:07:22.086 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.086 Queue depth: 32 00:07:22.086 Allocate depth: 32 00:07:22.086 # threads/core: 2 00:07:22.086 Run time: 1 seconds 00:07:22.086 Verify: Yes 00:07:22.086 00:07:22.086 Running for 1 seconds... 00:07:22.086 00:07:22.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.086 ------------------------------------------------------------------------------------ 00:07:22.086 0,1 40896/s 75 MiB/s 0 0 00:07:22.086 0,0 40800/s 75 MiB/s 0 0 00:07:22.086 ==================================================================================== 00:07:22.086 Total 81696/s 319 MiB/s 0 0' 00:07:22.086 20:00:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.086 20:00:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.086 20:00:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.086 20:00:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.086 20:00:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.086 20:00:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.086 20:00:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.086 20:00:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.086 20:00:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.086 20:00:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.086 20:00:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.086 20:00:29 -- accel/accel.sh@42 -- # jq -r . 00:07:22.347 [2024-12-16 20:00:29.727845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
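With -T 2 the configuration block reports "# threads/core: 2" and the results table gains a second Core,Thread row (0,1 alongside 0,0), both threads sharing core 0 since the default 0x1 core mask is still in effect. A minimal standalone equivalent of the captured command (same caveats as above about the omitted /dev/fd/62 config and the log's paths):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -T 2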
00:07:22.347 [2024-12-16 20:00:29.727949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:07:22.347 [2024-12-16 20:00:29.874793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.609 [2024-12-16 20:00:30.023460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=0x1 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=decompress 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=software 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=32 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- 
accel/accel.sh@21 -- # val=32 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=2 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val=Yes 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:22.609 20:00:30 -- accel/accel.sh@21 -- # val= 00:07:22.609 20:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # IFS=: 00:07:22.609 20:00:30 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@21 -- # val= 00:07:23.997 20:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # IFS=: 00:07:23.997 20:00:31 -- accel/accel.sh@20 -- # read -r var val 00:07:23.997 20:00:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.997 20:00:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.997 20:00:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.997 ************************************ 00:07:23.997 END TEST accel_decomp_mthread 00:07:23.997 ************************************ 00:07:23.997 00:07:23.997 real 0m3.837s 00:07:23.997 user 0m3.413s 00:07:23.997 sys 0m0.216s 00:07:23.997 20:00:31 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:23.997 20:00:31 -- common/autotest_common.sh@10 -- # set +x 00:07:24.258 20:00:31 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.258 20:00:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:24.258 20:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.258 20:00:31 -- common/autotest_common.sh@10 -- # set +x 00:07:24.258 ************************************ 00:07:24.258 START TEST accel_deomp_full_mthread 00:07:24.258 ************************************ 00:07:24.258 20:00:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.258 20:00:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.258 20:00:31 -- accel/accel.sh@17 -- # local accel_module 00:07:24.258 20:00:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.258 20:00:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.258 20:00:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.258 20:00:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.258 20:00:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.258 20:00:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.258 20:00:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.258 20:00:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.258 20:00:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.258 20:00:31 -- accel/accel.sh@42 -- # jq -r . 00:07:24.258 [2024-12-16 20:00:31.697374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.258 [2024-12-16 20:00:31.697469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 00:07:24.258 [2024-12-16 20:00:31.842541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.519 [2024-12-16 20:00:31.990419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.432 20:00:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.432 00:07:26.432 SPDK Configuration: 00:07:26.432 Core mask: 0x1 00:07:26.432 00:07:26.432 Accel Perf Configuration: 00:07:26.432 Workload Type: decompress 00:07:26.432 Transfer size: 111250 bytes 00:07:26.432 Vector count 1 00:07:26.432 Module: software 00:07:26.432 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.432 Queue depth: 32 00:07:26.432 Allocate depth: 32 00:07:26.432 # threads/core: 2 00:07:26.432 Run time: 1 seconds 00:07:26.432 Verify: Yes 00:07:26.432 00:07:26.432 Running for 1 seconds... 
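This case simply combines the two variations exercised above, -o 0 for full 111250-byte transfers and -T 2 for two threads on the core, as the configuration block just printed reflects. A standalone equivalent would presumably be (same assumptions as the earlier sketches):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -T 2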
00:07:26.432 00:07:26.432 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.432 ------------------------------------------------------------------------------------ 00:07:26.432 0,1 2720/s 112 MiB/s 0 0 00:07:26.432 0,0 2720/s 112 MiB/s 0 0 00:07:26.432 ==================================================================================== 00:07:26.432 Total 5440/s 577 MiB/s 0 0' 00:07:26.432 20:00:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.432 20:00:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.432 20:00:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.432 20:00:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.432 20:00:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.432 20:00:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.432 20:00:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.432 20:00:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.432 20:00:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.432 20:00:33 -- accel/accel.sh@42 -- # jq -r . 00:07:26.432 [2024-12-16 20:00:33.653781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.432 [2024-12-16 20:00:33.653878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:07:26.432 [2024-12-16 20:00:33.799024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.432 [2024-12-16 20:00:33.945672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val=0x1 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val=decompress 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val=software 00:07:26.432 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.432 20:00:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.432 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.432 20:00:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.693 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val=32 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val=32 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val=2 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val=Yes 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:26.694 20:00:34 -- accel/accel.sh@21 -- # val= 00:07:26.694 20:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # IFS=: 00:07:26.694 20:00:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # 
read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@21 -- # val= 00:07:28.077 20:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.077 20:00:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.077 20:00:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.077 20:00:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.077 20:00:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.077 00:07:28.077 real 0m3.916s 00:07:28.077 user 0m3.469s 00:07:28.077 sys 0m0.238s 00:07:28.077 20:00:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.077 20:00:35 -- common/autotest_common.sh@10 -- # set +x 00:07:28.077 ************************************ 00:07:28.077 END TEST accel_deomp_full_mthread 00:07:28.077 ************************************ 00:07:28.077 20:00:35 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:28.077 20:00:35 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.077 20:00:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.077 20:00:35 -- accel/accel.sh@129 -- # build_accel_config 00:07:28.077 20:00:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.077 20:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.077 20:00:35 -- common/autotest_common.sh@10 -- # set +x 00:07:28.077 20:00:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.077 20:00:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.077 20:00:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.077 20:00:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.077 20:00:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.077 20:00:35 -- accel/accel.sh@42 -- # jq -r . 00:07:28.077 ************************************ 00:07:28.077 START TEST accel_dif_functional_tests 00:07:28.077 ************************************ 00:07:28.077 20:00:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.077 [2024-12-16 20:00:35.677880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
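The multithreaded decompress runs above both come down to the accel_perf example binary; a minimal standalone sketch of the same invocation follows (paths and flags copied from the run_test line above; the harness's -c /dev/fd/62 JSON config is omitted here, so this presumably exercises the plain software module, as the log reports).

  SPDK=/home/vagrant/spdk_repo/spdk
  # 1-second verified software decompress of test/accel/bib with two worker threads per core (-T 2),
  # matching the "# threads/core: 2" and "Module: software" lines in the configuration dump above
  "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -o 0 -T 2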
00:07:28.077 [2024-12-16 20:00:35.678099] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59827 ] 00:07:28.339 [2024-12-16 20:00:35.823006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.339 [2024-12-16 20:00:35.973266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.339 [2024-12-16 20:00:35.973344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.339 [2024-12-16 20:00:35.973367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.600 00:07:28.600 00:07:28.600 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.600 http://cunit.sourceforge.net/ 00:07:28.600 00:07:28.600 00:07:28.600 Suite: accel_dif 00:07:28.600 Test: verify: DIF generated, GUARD check ...passed 00:07:28.600 Test: verify: DIF generated, APPTAG check ...passed 00:07:28.600 Test: verify: DIF generated, REFTAG check ...passed 00:07:28.600 Test: verify: DIF not generated, GUARD check ...[2024-12-16 20:00:36.152206] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.600 [2024-12-16 20:00:36.152305] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.600 passed 00:07:28.600 Test: verify: DIF not generated, APPTAG check ...passed 00:07:28.600 Test: verify: DIF not generated, REFTAG check ...[2024-12-16 20:00:36.152468] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.600 [2024-12-16 20:00:36.152521] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.600 [2024-12-16 20:00:36.152577] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.600 [2024-12-16 20:00:36.152616] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.600 passed 00:07:28.600 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:28.600 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:28.600 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-12-16 20:00:36.152782] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:28.600 passed 00:07:28.600 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:28.600 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:28.600 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-16 20:00:36.153052] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:28.600 passed 00:07:28.600 Test: generate copy: DIF generated, GUARD check ...passed 00:07:28.600 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:28.600 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:28.600 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:28.600 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:28.600 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:28.600 Test: generate copy: iovecs-len validate ...[2024-12-16 20:00:36.153646] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:28.600 passed 00:07:28.600 Test: generate copy: buffer alignment validate ...passed 00:07:28.600 00:07:28.600 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.600 suites 1 1 n/a 0 0 00:07:28.600 tests 20 20 20 0 0 00:07:28.600 asserts 204 204 204 0 n/a 00:07:28.600 00:07:28.600 Elapsed time = 0.005 seconds 00:07:29.173 00:07:29.173 real 0m1.152s 00:07:29.173 user 0m2.056s 00:07:29.173 sys 0m0.153s 00:07:29.173 20:00:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.173 20:00:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.173 ************************************ 00:07:29.173 END TEST accel_dif_functional_tests 00:07:29.173 ************************************ 00:07:29.173 00:07:29.173 real 1m27.098s 00:07:29.173 user 1m35.201s 00:07:29.173 sys 0m6.177s 00:07:29.173 20:00:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.173 20:00:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.173 ************************************ 00:07:29.173 END TEST accel 00:07:29.173 ************************************ 00:07:29.435 20:00:36 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:29.435 20:00:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.435 20:00:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.435 20:00:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.435 ************************************ 00:07:29.435 START TEST accel_rpc 00:07:29.435 ************************************ 00:07:29.435 20:00:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:29.435 * Looking for test storage... 00:07:29.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:29.435 20:00:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:29.435 20:00:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:29.435 20:00:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:29.435 20:00:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:29.435 20:00:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:29.435 20:00:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:29.435 20:00:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:29.435 20:00:36 -- scripts/common.sh@335 -- # IFS=.-: 00:07:29.435 20:00:36 -- scripts/common.sh@335 -- # read -ra ver1 00:07:29.435 20:00:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.435 20:00:36 -- scripts/common.sh@336 -- # read -ra ver2 00:07:29.435 20:00:36 -- scripts/common.sh@337 -- # local 'op=<' 00:07:29.435 20:00:36 -- scripts/common.sh@339 -- # ver1_l=2 00:07:29.435 20:00:36 -- scripts/common.sh@340 -- # ver2_l=1 00:07:29.435 20:00:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:29.435 20:00:36 -- scripts/common.sh@343 -- # case "$op" in 00:07:29.435 20:00:36 -- scripts/common.sh@344 -- # : 1 00:07:29.435 20:00:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:29.435 20:00:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.435 20:00:36 -- scripts/common.sh@364 -- # decimal 1 00:07:29.435 20:00:36 -- scripts/common.sh@352 -- # local d=1 00:07:29.435 20:00:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.435 20:00:36 -- scripts/common.sh@354 -- # echo 1 00:07:29.435 20:00:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:29.435 20:00:36 -- scripts/common.sh@365 -- # decimal 2 00:07:29.435 20:00:36 -- scripts/common.sh@352 -- # local d=2 00:07:29.435 20:00:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.435 20:00:36 -- scripts/common.sh@354 -- # echo 2 00:07:29.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.435 20:00:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:29.435 20:00:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:29.435 20:00:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:29.435 20:00:36 -- scripts/common.sh@367 -- # return 0 00:07:29.435 20:00:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.435 20:00:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.435 --rc genhtml_branch_coverage=1 00:07:29.435 --rc genhtml_function_coverage=1 00:07:29.435 --rc genhtml_legend=1 00:07:29.435 --rc geninfo_all_blocks=1 00:07:29.435 --rc geninfo_unexecuted_blocks=1 00:07:29.435 00:07:29.435 ' 00:07:29.435 20:00:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.435 --rc genhtml_branch_coverage=1 00:07:29.435 --rc genhtml_function_coverage=1 00:07:29.435 --rc genhtml_legend=1 00:07:29.435 --rc geninfo_all_blocks=1 00:07:29.435 --rc geninfo_unexecuted_blocks=1 00:07:29.435 00:07:29.435 ' 00:07:29.435 20:00:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.435 --rc genhtml_branch_coverage=1 00:07:29.435 --rc genhtml_function_coverage=1 00:07:29.435 --rc genhtml_legend=1 00:07:29.435 --rc geninfo_all_blocks=1 00:07:29.435 --rc geninfo_unexecuted_blocks=1 00:07:29.435 00:07:29.435 ' 00:07:29.435 20:00:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.435 --rc genhtml_branch_coverage=1 00:07:29.435 --rc genhtml_function_coverage=1 00:07:29.435 --rc genhtml_legend=1 00:07:29.435 --rc geninfo_all_blocks=1 00:07:29.435 --rc geninfo_unexecuted_blocks=1 00:07:29.435 00:07:29.435 ' 00:07:29.435 20:00:36 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.435 20:00:36 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59905 00:07:29.435 20:00:36 -- accel/accel_rpc.sh@15 -- # waitforlisten 59905 00:07:29.435 20:00:36 -- common/autotest_common.sh@829 -- # '[' -z 59905 ']' 00:07:29.435 20:00:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.435 20:00:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.435 20:00:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:29.435 20:00:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.435 20:00:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.435 20:00:36 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:29.435 [2024-12-16 20:00:37.044263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.436 [2024-12-16 20:00:37.044392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59905 ] 00:07:29.697 [2024-12-16 20:00:37.193975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.958 [2024-12-16 20:00:37.345158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.958 [2024-12-16 20:00:37.345333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.219 20:00:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.219 20:00:37 -- common/autotest_common.sh@862 -- # return 0 00:07:30.219 20:00:37 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:30.219 20:00:37 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:30.219 20:00:37 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:30.219 20:00:37 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:30.219 20:00:37 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:30.219 20:00:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.219 20:00:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.219 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:07:30.219 ************************************ 00:07:30.219 START TEST accel_assign_opcode 00:07:30.219 ************************************ 00:07:30.219 20:00:37 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:30.219 20:00:37 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:30.219 20:00:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.219 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:07:30.219 [2024-12-16 20:00:37.858621] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:30.480 20:00:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.480 20:00:37 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:30.480 20:00:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.480 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:07:30.480 [2024-12-16 20:00:37.866578] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:30.480 20:00:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.480 20:00:37 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:30.480 20:00:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.480 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:07:30.742 20:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.742 20:00:38 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:30.742 20:00:38 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:30.742 20:00:38 -- accel/accel_rpc.sh@42 -- # grep software 00:07:30.742 20:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.742 20:00:38 -- common/autotest_common.sh@10 -- # set +x 
00:07:30.742 20:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.742 software 00:07:30.742 ************************************ 00:07:30.742 END TEST accel_assign_opcode 00:07:30.742 ************************************ 00:07:30.742 00:07:30.742 real 0m0.485s 00:07:30.742 user 0m0.032s 00:07:30.742 sys 0m0.010s 00:07:30.742 20:00:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.742 20:00:38 -- common/autotest_common.sh@10 -- # set +x 00:07:30.742 20:00:38 -- accel/accel_rpc.sh@55 -- # killprocess 59905 00:07:30.742 20:00:38 -- common/autotest_common.sh@936 -- # '[' -z 59905 ']' 00:07:30.742 20:00:38 -- common/autotest_common.sh@940 -- # kill -0 59905 00:07:30.742 20:00:38 -- common/autotest_common.sh@941 -- # uname 00:07:30.742 20:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:30.742 20:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59905 00:07:31.003 killing process with pid 59905 00:07:31.003 20:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:31.003 20:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:31.003 20:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59905' 00:07:31.003 20:00:38 -- common/autotest_common.sh@955 -- # kill 59905 00:07:31.003 20:00:38 -- common/autotest_common.sh@960 -- # wait 59905 00:07:31.968 ************************************ 00:07:31.968 END TEST accel_rpc 00:07:31.968 ************************************ 00:07:31.968 00:07:31.968 real 0m2.743s 00:07:31.968 user 0m2.719s 00:07:31.968 sys 0m0.358s 00:07:31.968 20:00:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.968 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.229 20:00:39 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:32.229 20:00:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.229 20:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.229 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.229 ************************************ 00:07:32.229 START TEST app_cmdline 00:07:32.229 ************************************ 00:07:32.229 20:00:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:32.229 * Looking for test storage... 
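The accel_rpc suite above boots spdk_tgt with --wait-for-rpc, reassigns the copy opcode to the software module before framework initialization, and reads the assignment back. A minimal manual replay with rpc.py might look like the sketch below; the default RPC socket and the sleep standing in for the harness's waitforlisten are assumptions.

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_tgt --wait-for-rpc &
  sleep 2   # stand-in for waitforlisten: give the target time to open /var/tmp/spdk.sock
  "$SPDK"/scripts/rpc.py accel_assign_opc -o copy -m software   # must happen before framework init
  "$SPDK"/scripts/rpc.py framework_start_init
  "$SPDK"/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected to print "software"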
00:07:32.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:32.229 20:00:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:32.229 20:00:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:32.229 20:00:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:32.229 20:00:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:32.229 20:00:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:32.229 20:00:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:32.229 20:00:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:32.229 20:00:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:32.229 20:00:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:32.229 20:00:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.229 20:00:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:32.229 20:00:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:32.229 20:00:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:32.229 20:00:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:32.229 20:00:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:32.229 20:00:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:32.229 20:00:39 -- scripts/common.sh@344 -- # : 1 00:07:32.229 20:00:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:32.229 20:00:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.229 20:00:39 -- scripts/common.sh@364 -- # decimal 1 00:07:32.229 20:00:39 -- scripts/common.sh@352 -- # local d=1 00:07:32.229 20:00:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.229 20:00:39 -- scripts/common.sh@354 -- # echo 1 00:07:32.229 20:00:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:32.229 20:00:39 -- scripts/common.sh@365 -- # decimal 2 00:07:32.229 20:00:39 -- scripts/common.sh@352 -- # local d=2 00:07:32.229 20:00:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.229 20:00:39 -- scripts/common.sh@354 -- # echo 2 00:07:32.229 20:00:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:32.229 20:00:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:32.229 20:00:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:32.229 20:00:39 -- scripts/common.sh@367 -- # return 0 00:07:32.229 20:00:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.229 20:00:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.229 --rc genhtml_branch_coverage=1 00:07:32.229 --rc genhtml_function_coverage=1 00:07:32.229 --rc genhtml_legend=1 00:07:32.229 --rc geninfo_all_blocks=1 00:07:32.229 --rc geninfo_unexecuted_blocks=1 00:07:32.229 00:07:32.229 ' 00:07:32.229 20:00:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.229 --rc genhtml_branch_coverage=1 00:07:32.229 --rc genhtml_function_coverage=1 00:07:32.229 --rc genhtml_legend=1 00:07:32.229 --rc geninfo_all_blocks=1 00:07:32.229 --rc geninfo_unexecuted_blocks=1 00:07:32.229 00:07:32.229 ' 00:07:32.229 20:00:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.229 --rc genhtml_branch_coverage=1 00:07:32.229 --rc genhtml_function_coverage=1 00:07:32.229 --rc genhtml_legend=1 00:07:32.229 --rc geninfo_all_blocks=1 00:07:32.229 --rc geninfo_unexecuted_blocks=1 00:07:32.229 00:07:32.229 ' 00:07:32.229 20:00:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.229 --rc genhtml_branch_coverage=1 00:07:32.229 --rc genhtml_function_coverage=1 00:07:32.229 --rc genhtml_legend=1 00:07:32.229 --rc geninfo_all_blocks=1 00:07:32.229 --rc geninfo_unexecuted_blocks=1 00:07:32.229 00:07:32.229 ' 00:07:32.229 20:00:39 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:32.229 20:00:39 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60017 00:07:32.229 20:00:39 -- app/cmdline.sh@18 -- # waitforlisten 60017 00:07:32.229 20:00:39 -- common/autotest_common.sh@829 -- # '[' -z 60017 ']' 00:07:32.229 20:00:39 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:32.230 20:00:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.230 20:00:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.230 20:00:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.230 20:00:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.230 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.230 [2024-12-16 20:00:39.822580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.230 [2024-12-16 20:00:39.822813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:07:32.490 [2024-12-16 20:00:39.963239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.490 [2024-12-16 20:00:40.105672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.490 [2024-12-16 20:00:40.105828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.063 20:00:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.063 20:00:40 -- common/autotest_common.sh@862 -- # return 0 00:07:33.063 20:00:40 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:33.327 { 00:07:33.327 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:33.327 "fields": { 00:07:33.327 "major": 24, 00:07:33.327 "minor": 1, 00:07:33.327 "patch": 1, 00:07:33.327 "suffix": "-pre", 00:07:33.327 "commit": "c13c99a5e" 00:07:33.327 } 00:07:33.327 } 00:07:33.327 20:00:40 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:33.327 20:00:40 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:33.328 20:00:40 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:33.328 20:00:40 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:33.328 20:00:40 -- app/cmdline.sh@26 -- # sort 00:07:33.328 20:00:40 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:33.328 20:00:40 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:33.328 20:00:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.328 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:07:33.328 20:00:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.328 20:00:40 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:33.328 20:00:40 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:33.328 20:00:40 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.328 20:00:40 -- common/autotest_common.sh@650 -- # local es=0 00:07:33.328 20:00:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.328 20:00:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.328 20:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.328 20:00:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.328 20:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.328 20:00:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.328 20:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.328 20:00:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.328 20:00:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:33.328 20:00:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.589 request: 00:07:33.589 { 00:07:33.589 "method": "env_dpdk_get_mem_stats", 00:07:33.589 "req_id": 1 00:07:33.589 } 00:07:33.589 Got JSON-RPC error response 00:07:33.589 response: 00:07:33.589 { 00:07:33.589 "code": -32601, 00:07:33.589 "message": "Method not found" 00:07:33.589 } 00:07:33.589 20:00:41 -- common/autotest_common.sh@653 -- # es=1 00:07:33.589 20:00:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.589 20:00:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.589 20:00:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.589 20:00:41 -- app/cmdline.sh@1 -- # killprocess 60017 00:07:33.589 20:00:41 -- common/autotest_common.sh@936 -- # '[' -z 60017 ']' 00:07:33.589 20:00:41 -- common/autotest_common.sh@940 -- # kill -0 60017 00:07:33.589 20:00:41 -- common/autotest_common.sh@941 -- # uname 00:07:33.589 20:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:33.589 20:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60017 00:07:33.589 killing process with pid 60017 00:07:33.589 20:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:33.589 20:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:33.589 20:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60017' 00:07:33.589 20:00:41 -- common/autotest_common.sh@955 -- # kill 60017 00:07:33.589 20:00:41 -- common/autotest_common.sh@960 -- # wait 60017 00:07:34.975 ************************************ 00:07:34.975 END TEST app_cmdline 00:07:34.975 ************************************ 00:07:34.975 00:07:34.975 real 0m2.623s 00:07:34.975 user 0m2.911s 00:07:34.975 sys 0m0.395s 00:07:34.975 20:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.975 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.975 20:00:42 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.975 20:00:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.975 20:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.975 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.975 
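The app_cmdline suite above starts the target with an RPC allow-list, so spdk_get_version and rpc_get_methods answer normally while anything else, such as env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601. A minimal replay of that behaviour (default socket and a fixed startup wait are assumptions):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2
  "$SPDK"/scripts/rpc.py spdk_get_version         # returns the version object shown in the log
  "$SPDK"/scripts/rpc.py rpc_get_methods          # lists exactly the two allowed methods
  "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats   # fails with code -32601, "Method not found"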
************************************ 00:07:34.975 START TEST version 00:07:34.975 ************************************ 00:07:34.975 20:00:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.975 * Looking for test storage... 00:07:34.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:34.975 20:00:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:34.975 20:00:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:34.975 20:00:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:34.975 20:00:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:34.975 20:00:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:34.975 20:00:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:34.975 20:00:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:34.975 20:00:42 -- scripts/common.sh@335 -- # IFS=.-: 00:07:34.975 20:00:42 -- scripts/common.sh@335 -- # read -ra ver1 00:07:34.975 20:00:42 -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.975 20:00:42 -- scripts/common.sh@336 -- # read -ra ver2 00:07:34.975 20:00:42 -- scripts/common.sh@337 -- # local 'op=<' 00:07:34.975 20:00:42 -- scripts/common.sh@339 -- # ver1_l=2 00:07:34.975 20:00:42 -- scripts/common.sh@340 -- # ver2_l=1 00:07:34.975 20:00:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:34.975 20:00:42 -- scripts/common.sh@343 -- # case "$op" in 00:07:34.975 20:00:42 -- scripts/common.sh@344 -- # : 1 00:07:34.975 20:00:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:34.975 20:00:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.975 20:00:42 -- scripts/common.sh@364 -- # decimal 1 00:07:34.975 20:00:42 -- scripts/common.sh@352 -- # local d=1 00:07:34.975 20:00:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.975 20:00:42 -- scripts/common.sh@354 -- # echo 1 00:07:34.975 20:00:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:34.975 20:00:42 -- scripts/common.sh@365 -- # decimal 2 00:07:34.975 20:00:42 -- scripts/common.sh@352 -- # local d=2 00:07:34.975 20:00:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.975 20:00:42 -- scripts/common.sh@354 -- # echo 2 00:07:34.975 20:00:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:34.975 20:00:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:34.975 20:00:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:34.975 20:00:42 -- scripts/common.sh@367 -- # return 0 00:07:34.975 20:00:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.975 20:00:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.975 --rc genhtml_branch_coverage=1 00:07:34.975 --rc genhtml_function_coverage=1 00:07:34.975 --rc genhtml_legend=1 00:07:34.975 --rc geninfo_all_blocks=1 00:07:34.975 --rc geninfo_unexecuted_blocks=1 00:07:34.975 00:07:34.975 ' 00:07:34.975 20:00:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.975 --rc genhtml_branch_coverage=1 00:07:34.975 --rc genhtml_function_coverage=1 00:07:34.975 --rc genhtml_legend=1 00:07:34.975 --rc geninfo_all_blocks=1 00:07:34.975 --rc geninfo_unexecuted_blocks=1 00:07:34.975 00:07:34.975 ' 00:07:34.975 20:00:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:34.975 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:34.975 --rc genhtml_branch_coverage=1 00:07:34.975 --rc genhtml_function_coverage=1 00:07:34.975 --rc genhtml_legend=1 00:07:34.975 --rc geninfo_all_blocks=1 00:07:34.975 --rc geninfo_unexecuted_blocks=1 00:07:34.975 00:07:34.975 ' 00:07:34.975 20:00:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.975 --rc genhtml_branch_coverage=1 00:07:34.975 --rc genhtml_function_coverage=1 00:07:34.975 --rc genhtml_legend=1 00:07:34.975 --rc geninfo_all_blocks=1 00:07:34.975 --rc geninfo_unexecuted_blocks=1 00:07:34.975 00:07:34.975 ' 00:07:34.975 20:00:42 -- app/version.sh@17 -- # get_header_version major 00:07:34.975 20:00:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.975 20:00:42 -- app/version.sh@14 -- # cut -f2 00:07:34.975 20:00:42 -- app/version.sh@14 -- # tr -d '"' 00:07:34.975 20:00:42 -- app/version.sh@17 -- # major=24 00:07:34.975 20:00:42 -- app/version.sh@18 -- # get_header_version minor 00:07:34.975 20:00:42 -- app/version.sh@14 -- # cut -f2 00:07:34.975 20:00:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.975 20:00:42 -- app/version.sh@14 -- # tr -d '"' 00:07:34.975 20:00:42 -- app/version.sh@18 -- # minor=1 00:07:34.975 20:00:42 -- app/version.sh@19 -- # get_header_version patch 00:07:34.975 20:00:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.975 20:00:42 -- app/version.sh@14 -- # cut -f2 00:07:34.975 20:00:42 -- app/version.sh@14 -- # tr -d '"' 00:07:34.975 20:00:42 -- app/version.sh@19 -- # patch=1 00:07:34.975 20:00:42 -- app/version.sh@20 -- # get_header_version suffix 00:07:34.975 20:00:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.975 20:00:42 -- app/version.sh@14 -- # cut -f2 00:07:34.975 20:00:42 -- app/version.sh@14 -- # tr -d '"' 00:07:34.975 20:00:42 -- app/version.sh@20 -- # suffix=-pre 00:07:34.975 20:00:42 -- app/version.sh@22 -- # version=24.1 00:07:34.975 20:00:42 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:34.975 20:00:42 -- app/version.sh@25 -- # version=24.1.1 00:07:34.975 20:00:42 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:34.975 20:00:42 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:34.975 20:00:42 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:34.975 20:00:42 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:34.975 20:00:42 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:34.975 00:07:34.975 real 0m0.203s 00:07:34.975 user 0m0.126s 00:07:34.975 sys 0m0.105s 00:07:34.975 20:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.975 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.975 ************************************ 00:07:34.975 END TEST version 00:07:34.975 ************************************ 00:07:34.975 20:00:42 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:34.975 20:00:42 -- spdk/autotest.sh@191 -- # uname -s 00:07:34.975 20:00:42 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
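version.sh above recovers the release number by grepping the SPDK_VERSION_* defines out of include/spdk/version.h and then checks the result against the Python package. A condensed sketch of that extraction is below; the tab-delimited cut mirrors the trace, the single PYTHONPATH entry is a simplification, and the final "-pre" to "rc0" mapping is stated as a comment rather than reimplemented.

  H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  get() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$H" | cut -f2 | tr -d '"'; }
  major=$(get MAJOR); minor=$(get MINOR); patch=$(get PATCH); suffix=$(get SUFFIX)
  version=$major.$minor; (( patch != 0 )) && version=$version.$patch
  echo "$version$suffix"   # 24.1.1-pre; version.sh maps "-pre" to "rc0", giving 24.1.1rc0 as above
  PYTHONPATH=/home/vagrant/spdk_repo/spdk/python python3 -c 'import spdk; print(spdk.__version__)'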
00:07:34.975 20:00:42 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:34.975 20:00:42 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:34.976 20:00:42 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:34.976 20:00:42 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:34.976 20:00:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:34.976 20:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.976 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.976 ************************************ 00:07:34.976 START TEST blockdev_nvme 00:07:34.976 ************************************ 00:07:34.976 20:00:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:35.238 * Looking for test storage... 00:07:35.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:35.238 20:00:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:35.238 20:00:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:35.238 20:00:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:35.238 20:00:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:35.238 20:00:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:35.238 20:00:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:35.238 20:00:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:35.238 20:00:42 -- scripts/common.sh@335 -- # IFS=.-: 00:07:35.238 20:00:42 -- scripts/common.sh@335 -- # read -ra ver1 00:07:35.238 20:00:42 -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.238 20:00:42 -- scripts/common.sh@336 -- # read -ra ver2 00:07:35.238 20:00:42 -- scripts/common.sh@337 -- # local 'op=<' 00:07:35.238 20:00:42 -- scripts/common.sh@339 -- # ver1_l=2 00:07:35.238 20:00:42 -- scripts/common.sh@340 -- # ver2_l=1 00:07:35.238 20:00:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:35.238 20:00:42 -- scripts/common.sh@343 -- # case "$op" in 00:07:35.238 20:00:42 -- scripts/common.sh@344 -- # : 1 00:07:35.238 20:00:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:35.238 20:00:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.238 20:00:42 -- scripts/common.sh@364 -- # decimal 1 00:07:35.238 20:00:42 -- scripts/common.sh@352 -- # local d=1 00:07:35.238 20:00:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.238 20:00:42 -- scripts/common.sh@354 -- # echo 1 00:07:35.238 20:00:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:35.238 20:00:42 -- scripts/common.sh@365 -- # decimal 2 00:07:35.238 20:00:42 -- scripts/common.sh@352 -- # local d=2 00:07:35.238 20:00:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.238 20:00:42 -- scripts/common.sh@354 -- # echo 2 00:07:35.238 20:00:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:35.238 20:00:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:35.238 20:00:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:35.238 20:00:42 -- scripts/common.sh@367 -- # return 0 00:07:35.238 20:00:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.238 20:00:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.238 --rc genhtml_branch_coverage=1 00:07:35.238 --rc genhtml_function_coverage=1 00:07:35.238 --rc genhtml_legend=1 00:07:35.238 --rc geninfo_all_blocks=1 00:07:35.238 --rc geninfo_unexecuted_blocks=1 00:07:35.238 00:07:35.238 ' 00:07:35.238 20:00:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.238 --rc genhtml_branch_coverage=1 00:07:35.238 --rc genhtml_function_coverage=1 00:07:35.238 --rc genhtml_legend=1 00:07:35.238 --rc geninfo_all_blocks=1 00:07:35.238 --rc geninfo_unexecuted_blocks=1 00:07:35.238 00:07:35.238 ' 00:07:35.238 20:00:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.238 --rc genhtml_branch_coverage=1 00:07:35.238 --rc genhtml_function_coverage=1 00:07:35.238 --rc genhtml_legend=1 00:07:35.238 --rc geninfo_all_blocks=1 00:07:35.238 --rc geninfo_unexecuted_blocks=1 00:07:35.238 00:07:35.238 ' 00:07:35.238 20:00:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.238 --rc genhtml_branch_coverage=1 00:07:35.238 --rc genhtml_function_coverage=1 00:07:35.238 --rc genhtml_legend=1 00:07:35.238 --rc geninfo_all_blocks=1 00:07:35.238 --rc geninfo_unexecuted_blocks=1 00:07:35.238 00:07:35.238 ' 00:07:35.238 20:00:42 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:35.238 20:00:42 -- bdev/nbd_common.sh@6 -- # set -e 00:07:35.238 20:00:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:35.238 20:00:42 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.238 20:00:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:35.238 20:00:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:35.238 20:00:42 -- bdev/blockdev.sh@18 -- # : 00:07:35.238 20:00:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:35.238 20:00:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:35.238 20:00:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:35.238 20:00:42 -- bdev/blockdev.sh@672 -- # uname -s 00:07:35.238 20:00:42 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:35.238 20:00:42 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:35.238 20:00:42 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:35.238 20:00:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:35.238 20:00:42 -- bdev/blockdev.sh@682 -- # dek= 00:07:35.238 20:00:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:35.238 20:00:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:35.238 20:00:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:35.238 20:00:42 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:35.238 20:00:42 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:35.238 20:00:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:35.238 20:00:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60187 00:07:35.238 20:00:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:35.238 20:00:42 -- bdev/blockdev.sh@47 -- # waitforlisten 60187 00:07:35.238 20:00:42 -- common/autotest_common.sh@829 -- # '[' -z 60187 ']' 00:07:35.238 20:00:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.238 20:00:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.238 20:00:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.238 20:00:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.238 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.238 20:00:42 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:35.238 [2024-12-16 20:00:42.795722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.238 [2024-12-16 20:00:42.795924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:07:35.500 [2024-12-16 20:00:42.948399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.500 [2024-12-16 20:00:43.104017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.500 [2024-12-16 20:00:43.104173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.071 20:00:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.071 20:00:43 -- common/autotest_common.sh@862 -- # return 0 00:07:36.071 20:00:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:36.071 20:00:43 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:36.071 20:00:43 -- bdev/blockdev.sh@79 -- # local json 00:07:36.071 20:00:43 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:36.071 20:00:43 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.072 20:00:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:36.072 20:00:43 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.072 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.333 20:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.333 20:00:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:36.333 20:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.333 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.333 20:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.333 20:00:43 -- bdev/blockdev.sh@738 -- # cat 00:07:36.333 20:00:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:36.333 20:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.333 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.333 20:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.333 20:00:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:36.333 20:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.333 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.333 20:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.333 20:00:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:36.333 20:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.333 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.333 20:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.333 20:00:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:36.333 20:00:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:36.333 20:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.333 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.333 20:00:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:36.595 20:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.595 20:00:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:36.595 20:00:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:36.596 20:00:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8c65f7a4-8eb6-4cdf-bc7a-2e42210488d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8c65f7a4-8eb6-4cdf-bc7a-2e42210488d8",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5d555e13-11ff-4b2f-8ec6-46d5bd7ddae6"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5d555e13-11ff-4b2f-8ec6-46d5bd7ddae6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cf521e7e-75c3-42ad-877b-b53f7e3b0d8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cf521e7e-75c3-42ad-877b-b53f7e3b0d8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b0d0134c-f42c-47ac-b343-4d006cbf7352"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0d0134c-f42c-47ac-b343-4d006cbf7352",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c6c09348-00fe-49c8-9ff9-77d5d4e034fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c6c09348-00fe-49c8-9ff9-77d5d4e034fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ffb54779-d1a9-4261-b45d-3a7f349d358c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ffb54779-d1a9-4261-b45d-3a7f349d358c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:36.596 20:00:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:36.596 20:00:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:36.596 20:00:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:36.596 20:00:44 -- bdev/blockdev.sh@752 -- # killprocess 60187 00:07:36.596 20:00:44 -- common/autotest_common.sh@936 -- # '[' -z 60187 ']' 00:07:36.596 20:00:44 -- common/autotest_common.sh@940 -- # kill -0 60187 00:07:36.596 20:00:44 -- common/autotest_common.sh@941 -- # uname 00:07:36.596 20:00:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:36.596 20:00:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60187 
00:07:36.596 killing process with pid 60187 00:07:36.596 20:00:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:36.596 20:00:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:36.596 20:00:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60187' 00:07:36.596 20:00:44 -- common/autotest_common.sh@955 -- # kill 60187 00:07:36.596 20:00:44 -- common/autotest_common.sh@960 -- # wait 60187 00:07:37.983 20:00:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:37.983 20:00:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:37.983 20:00:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:37.983 20:00:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.983 20:00:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.983 ************************************ 00:07:37.983 START TEST bdev_hello_world 00:07:37.983 ************************************ 00:07:37.983 20:00:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:37.983 [2024-12-16 20:00:45.299616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.983 [2024-12-16 20:00:45.299736] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60260 ] 00:07:37.983 [2024-12-16 20:00:45.446784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.983 [2024-12-16 20:00:45.595739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.553 [2024-12-16 20:00:46.063943] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:38.553 [2024-12-16 20:00:46.063987] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:38.553 [2024-12-16 20:00:46.064002] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:38.553 [2024-12-16 20:00:46.065906] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:38.553 [2024-12-16 20:00:46.066198] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:38.553 [2024-12-16 20:00:46.066222] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:38.553 [2024-12-16 20:00:46.066450] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
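Note: the bdev_hello_world pass above simply runs SPDK's hello_bdev example against the Nvme0n1 bdev from the generated config: it opens the bdev and an I/O channel, writes a buffer, reads it back, and succeeds once the string round-trips ("Read string from bdev : Hello World!"). Outside the harness the check amounts to the same invocation used here, run from the spdk checkout:
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1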
00:07:38.553 00:07:38.553 [2024-12-16 20:00:46.066475] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:39.124 00:07:39.124 real 0m1.448s 00:07:39.124 user 0m1.184s 00:07:39.124 sys 0m0.158s 00:07:39.124 20:00:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.124 ************************************ 00:07:39.124 END TEST bdev_hello_world 00:07:39.124 ************************************ 00:07:39.124 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:07:39.124 20:00:46 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:39.124 20:00:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.124 20:00:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.125 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:07:39.125 ************************************ 00:07:39.125 START TEST bdev_bounds 00:07:39.125 ************************************ 00:07:39.125 20:00:46 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:39.125 Process bdevio pid: 60302 00:07:39.125 20:00:46 -- bdev/blockdev.sh@288 -- # bdevio_pid=60302 00:07:39.125 20:00:46 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.125 20:00:46 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60302' 00:07:39.125 20:00:46 -- bdev/blockdev.sh@291 -- # waitforlisten 60302 00:07:39.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.125 20:00:46 -- common/autotest_common.sh@829 -- # '[' -z 60302 ']' 00:07:39.125 20:00:46 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:39.125 20:00:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.125 20:00:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.125 20:00:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.125 20:00:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.125 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:07:39.385 [2024-12-16 20:00:46.802340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:39.385 [2024-12-16 20:00:46.802461] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:07:39.385 [2024-12-16 20:00:46.951349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.646 [2024-12-16 20:00:47.102865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.646 [2024-12-16 20:00:47.103515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.646 [2024-12-16 20:00:47.103537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.217 20:00:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.217 20:00:47 -- common/autotest_common.sh@862 -- # return 0 00:07:40.217 20:00:47 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:40.217 I/O targets: 00:07:40.217 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:40.217 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:40.217 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:40.217 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:40.217 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:40.217 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:40.217 00:07:40.217 00:07:40.217 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.217 http://cunit.sourceforge.net/ 00:07:40.217 00:07:40.217 00:07:40.217 Suite: bdevio tests on: Nvme3n1 00:07:40.217 Test: blockdev write read block ...passed 00:07:40.217 Test: blockdev write zeroes read block ...passed 00:07:40.217 Test: blockdev write zeroes read no split ...passed 00:07:40.217 Test: blockdev write zeroes read split ...passed 00:07:40.217 Test: blockdev write zeroes read split partial ...passed 00:07:40.217 Test: blockdev reset ...[2024-12-16 20:00:47.773971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:40.217 [2024-12-16 20:00:47.776646] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
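Note: bdev_bounds drives these CUnit suites through the bdevio app started above: bdevio is launched with -w so it waits for the perform_tests RPC on its socket, and test/bdev/bdevio/tests.py then triggers one suite per bdev listed under "I/O targets". Stripped of the harness wrappers, the two steps are roughly (run from the spdk checkout, RPC socket left at its default):
  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  ./test/bdev/bdevio/tests.py perform_tests
The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions printed inside cases marked "passed" appear to be the completions those negative-path checks expect, not test errors; the one case skipped outright further down is comparev_and_writev on Nvme0n1, whose separate metadata bdevio does not support yet.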
00:07:40.217 passed 00:07:40.217 Test: blockdev write read 8 blocks ...passed 00:07:40.217 Test: blockdev write read size > 128k ...passed 00:07:40.217 Test: blockdev write read invalid size ...passed 00:07:40.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.217 Test: blockdev write read max offset ...passed 00:07:40.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.217 Test: blockdev writev readv 8 blocks ...passed 00:07:40.217 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.217 Test: blockdev writev readv block ...passed 00:07:40.217 Test: blockdev writev readv size > 128k ...passed 00:07:40.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.217 Test: blockdev comparev and writev ...[2024-12-16 20:00:47.783486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27020e000 len:0x1000 00:07:40.217 [2024-12-16 20:00:47.783608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.217 passed 00:07:40.217 Test: blockdev nvme passthru rw ...passed 00:07:40.217 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:00:47.784223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.217 [2024-12-16 20:00:47.784310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.217 passed 00:07:40.217 Test: blockdev nvme admin passthru ...passed 00:07:40.217 Test: blockdev copy ...passed 00:07:40.217 Suite: bdevio tests on: Nvme2n3 00:07:40.217 Test: blockdev write read block ...passed 00:07:40.217 Test: blockdev write zeroes read block ...passed 00:07:40.217 Test: blockdev write zeroes read no split ...passed 00:07:40.217 Test: blockdev write zeroes read split ...passed 00:07:40.217 Test: blockdev write zeroes read split partial ...passed 00:07:40.217 Test: blockdev reset ...[2024-12-16 20:00:47.843304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:40.217 [2024-12-16 20:00:47.846006] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:40.217 passed 00:07:40.217 Test: blockdev write read 8 blocks ...passed 00:07:40.217 Test: blockdev write read size > 128k ...passed 00:07:40.217 Test: blockdev write read invalid size ...passed 00:07:40.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.217 Test: blockdev write read max offset ...passed 00:07:40.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.217 Test: blockdev writev readv 8 blocks ...passed 00:07:40.217 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.217 Test: blockdev writev readv block ...passed 00:07:40.217 Test: blockdev writev readv size > 128k ...passed 00:07:40.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.217 Test: blockdev comparev and writev ...[2024-12-16 20:00:47.852735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27020a000 len:0x1000 00:07:40.217 [2024-12-16 20:00:47.852834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.217 passed 00:07:40.217 Test: blockdev nvme passthru rw ...passed 00:07:40.217 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:00:47.853511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.217 passed 00:07:40.217 Test: blockdev nvme admin passthru ...[2024-12-16 20:00:47.853583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.478 passed 00:07:40.478 Test: blockdev copy ...passed 00:07:40.478 Suite: bdevio tests on: Nvme2n2 00:07:40.478 Test: blockdev write read block ...passed 00:07:40.478 Test: blockdev write zeroes read block ...passed 00:07:40.478 Test: blockdev write zeroes read no split ...passed 00:07:40.478 Test: blockdev write zeroes read split ...passed 00:07:40.478 Test: blockdev write zeroes read split partial ...passed 00:07:40.478 Test: blockdev reset ...[2024-12-16 20:00:47.909410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:40.478 [2024-12-16 20:00:47.911894] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:40.478 passed 00:07:40.478 Test: blockdev write read 8 blocks ...passed 00:07:40.478 Test: blockdev write read size > 128k ...passed 00:07:40.478 Test: blockdev write read invalid size ...passed 00:07:40.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.478 Test: blockdev write read max offset ...passed 00:07:40.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.478 Test: blockdev writev readv 8 blocks ...passed 00:07:40.478 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.478 Test: blockdev writev readv block ...passed 00:07:40.478 Test: blockdev writev readv size > 128k ...passed 00:07:40.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.478 Test: blockdev comparev and writev ...[2024-12-16 20:00:47.918283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279c06000 len:0x1000 00:07:40.478 [2024-12-16 20:00:47.918390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.478 passed 00:07:40.478 Test: blockdev nvme passthru rw ...passed 00:07:40.478 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:00:47.918918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.478 [2024-12-16 20:00:47.918981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.478 passed 00:07:40.478 Test: blockdev nvme admin passthru ...passed 00:07:40.478 Test: blockdev copy ...passed 00:07:40.478 Suite: bdevio tests on: Nvme2n1 00:07:40.478 Test: blockdev write read block ...passed 00:07:40.478 Test: blockdev write zeroes read block ...passed 00:07:40.478 Test: blockdev write zeroes read no split ...passed 00:07:40.478 Test: blockdev write zeroes read split ...passed 00:07:40.478 Test: blockdev write zeroes read split partial ...passed 00:07:40.478 Test: blockdev reset ...[2024-12-16 20:00:47.962137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:40.478 [2024-12-16 20:00:47.966226] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:40.478 passed 00:07:40.478 Test: blockdev write read 8 blocks ...passed 00:07:40.478 Test: blockdev write read size > 128k ...passed 00:07:40.478 Test: blockdev write read invalid size ...passed 00:07:40.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.478 Test: blockdev write read max offset ...passed 00:07:40.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.478 Test: blockdev writev readv 8 blocks ...passed 00:07:40.478 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.478 Test: blockdev writev readv block ...passed 00:07:40.478 Test: blockdev writev readv size > 128k ...passed 00:07:40.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.478 Test: blockdev comparev and writev ...[2024-12-16 20:00:47.978914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279c01000 len:0x1000 00:07:40.478 [2024-12-16 20:00:47.979011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.478 passed 00:07:40.478 Test: blockdev nvme passthru rw ...passed 00:07:40.478 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:00:47.981533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.478 [2024-12-16 20:00:47.981614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.478 passed 00:07:40.478 Test: blockdev nvme admin passthru ...passed 00:07:40.478 Test: blockdev copy ...passed 00:07:40.478 Suite: bdevio tests on: Nvme1n1 00:07:40.478 Test: blockdev write read block ...passed 00:07:40.478 Test: blockdev write zeroes read block ...passed 00:07:40.478 Test: blockdev write zeroes read no split ...passed 00:07:40.478 Test: blockdev write zeroes read split ...passed 00:07:40.478 Test: blockdev write zeroes read split partial ...passed 00:07:40.478 Test: blockdev reset ...[2024-12-16 20:00:48.033148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:40.478 [2024-12-16 20:00:48.038649] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:40.478 passed 00:07:40.478 Test: blockdev write read 8 blocks ...passed 00:07:40.478 Test: blockdev write read size > 128k ...passed 00:07:40.478 Test: blockdev write read invalid size ...passed 00:07:40.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.478 Test: blockdev write read max offset ...passed 00:07:40.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.478 Test: blockdev writev readv 8 blocks ...passed 00:07:40.478 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.478 Test: blockdev writev readv block ...passed 00:07:40.478 Test: blockdev writev readv size > 128k ...passed 00:07:40.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.479 Test: blockdev comparev and writev ...[2024-12-16 20:00:48.055951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26aa06000 len:0x1000 00:07:40.479 [2024-12-16 20:00:48.056054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.479 passed 00:07:40.479 Test: blockdev nvme passthru rw ...passed 00:07:40.479 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:00:48.058098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.479 [2024-12-16 20:00:48.058179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.479 passed 00:07:40.479 Test: blockdev nvme admin passthru ...passed 00:07:40.479 Test: blockdev copy ...passed 00:07:40.479 Suite: bdevio tests on: Nvme0n1 00:07:40.479 Test: blockdev write read block ...passed 00:07:40.479 Test: blockdev write zeroes read block ...passed 00:07:40.479 Test: blockdev write zeroes read no split ...passed 00:07:40.479 Test: blockdev write zeroes read split ...passed 00:07:40.740 Test: blockdev write zeroes read split partial ...passed 00:07:40.740 Test: blockdev reset ...[2024-12-16 20:00:48.118542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:40.740 [2024-12-16 20:00:48.121780] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:40.740 passed 00:07:40.740 Test: blockdev write read 8 blocks ...passed 00:07:40.740 Test: blockdev write read size > 128k ...passed 00:07:40.740 Test: blockdev write read invalid size ...passed 00:07:40.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.740 Test: blockdev write read max offset ...passed 00:07:40.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.740 Test: blockdev writev readv 8 blocks ...passed 00:07:40.740 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.740 Test: blockdev writev readv block ...passed 00:07:40.740 Test: blockdev writev readv size > 128k ...passed 00:07:40.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.740 Test: blockdev comparev and writev ...[2024-12-16 20:00:48.129343] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:40.740 separate metadata which is not supported yet. 
00:07:40.740 passed 00:07:40.740 Test: blockdev nvme passthru rw ...passed 00:07:40.740 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:00:48.129952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:40.740 [2024-12-16 20:00:48.130036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:40.740 passed 00:07:40.740 Test: blockdev nvme admin passthru ...passed 00:07:40.740 Test: blockdev copy ...passed 00:07:40.740 00:07:40.740 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.740 suites 6 6 n/a 0 0 00:07:40.740 tests 138 138 138 0 0 00:07:40.740 asserts 893 893 893 0 n/a 00:07:40.740 00:07:40.740 Elapsed time = 1.095 seconds 00:07:40.740 0 00:07:40.741 20:00:48 -- bdev/blockdev.sh@293 -- # killprocess 60302 00:07:40.741 20:00:48 -- common/autotest_common.sh@936 -- # '[' -z 60302 ']' 00:07:40.741 20:00:48 -- common/autotest_common.sh@940 -- # kill -0 60302 00:07:40.741 20:00:48 -- common/autotest_common.sh@941 -- # uname 00:07:40.741 20:00:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.741 20:00:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60302 00:07:40.741 20:00:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.741 killing process with pid 60302 00:07:40.741 20:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.741 20:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60302' 00:07:40.741 20:00:48 -- common/autotest_common.sh@955 -- # kill 60302 00:07:40.741 20:00:48 -- common/autotest_common.sh@960 -- # wait 60302 00:07:41.312 20:00:48 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:41.312 00:07:41.312 real 0m2.192s 00:07:41.312 user 0m5.353s 00:07:41.312 sys 0m0.273s 00:07:41.312 20:00:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.312 ************************************ 00:07:41.312 END TEST bdev_bounds 00:07:41.312 ************************************ 00:07:41.312 20:00:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.573 20:00:48 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:41.573 20:00:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:41.573 20:00:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.573 20:00:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.573 ************************************ 00:07:41.573 START TEST bdev_nbd 00:07:41.573 ************************************ 00:07:41.573 20:00:49 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:41.573 20:00:49 -- bdev/blockdev.sh@298 -- # uname -s 00:07:41.573 20:00:49 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:41.573 20:00:49 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.573 20:00:49 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.573 20:00:49 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.573 20:00:49 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:41.573 20:00:49 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:41.573 20:00:49 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:07:41.573 20:00:49 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:41.573 20:00:49 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:41.573 20:00:49 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:41.573 20:00:49 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:41.573 20:00:49 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:41.573 20:00:49 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.573 20:00:49 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:41.573 20:00:49 -- bdev/blockdev.sh@316 -- # nbd_pid=60356 00:07:41.573 20:00:49 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:41.573 20:00:49 -- bdev/blockdev.sh@318 -- # waitforlisten 60356 /var/tmp/spdk-nbd.sock 00:07:41.573 20:00:49 -- common/autotest_common.sh@829 -- # '[' -z 60356 ']' 00:07:41.573 20:00:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:41.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:41.573 20:00:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.573 20:00:49 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.573 20:00:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:41.573 20:00:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.573 20:00:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.573 [2024-12-16 20:00:49.076885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:41.573 [2024-12-16 20:00:49.077029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.833 [2024-12-16 20:00:49.232554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.833 [2024-12-16 20:00:49.470441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.217 20:00:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.217 20:00:50 -- common/autotest_common.sh@862 -- # return 0 00:07:43.217 20:00:50 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@24 -- # local i 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:43.217 20:00:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:43.217 20:00:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:43.217 20:00:50 -- common/autotest_common.sh@867 -- # local i 00:07:43.217 20:00:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.217 20:00:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.217 20:00:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:43.478 20:00:50 -- common/autotest_common.sh@871 -- # break 00:07:43.478 20:00:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.478 20:00:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.478 20:00:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.478 1+0 records in 00:07:43.478 1+0 records out 00:07:43.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877646 s, 4.7 MB/s 00:07:43.478 20:00:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.478 20:00:50 -- common/autotest_common.sh@884 -- # size=4096 00:07:43.478 20:00:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.478 20:00:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.478 20:00:50 -- common/autotest_common.sh@887 -- # return 0 00:07:43.478 20:00:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.478 20:00:50 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:43.478 20:00:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:43.478 20:00:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:43.478 20:00:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:43.478 20:00:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:43.478 20:00:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:43.478 20:00:51 -- common/autotest_common.sh@867 -- # local i 00:07:43.478 20:00:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.478 20:00:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.478 20:00:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:43.478 20:00:51 -- common/autotest_common.sh@871 -- # break 00:07:43.478 20:00:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.478 20:00:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.478 20:00:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.478 1+0 records in 00:07:43.478 1+0 records out 00:07:43.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611452 s, 6.7 MB/s 00:07:43.478 20:00:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.478 20:00:51 -- common/autotest_common.sh@884 -- # size=4096 00:07:43.478 20:00:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.478 20:00:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.478 20:00:51 -- common/autotest_common.sh@887 -- # return 0 00:07:43.478 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.478 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:43.478 20:00:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:43.738 20:00:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:43.738 20:00:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:43.738 20:00:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:43.738 20:00:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:43.738 20:00:51 -- common/autotest_common.sh@867 -- # local i 00:07:43.738 20:00:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.738 20:00:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.738 20:00:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:43.738 20:00:51 -- common/autotest_common.sh@871 -- # break 00:07:43.738 20:00:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.738 20:00:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.738 20:00:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.738 1+0 records in 00:07:43.738 1+0 records out 00:07:43.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781896 s, 5.2 MB/s 00:07:43.738 20:00:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.738 20:00:51 -- common/autotest_common.sh@884 -- # size=4096 00:07:43.738 20:00:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.738 20:00:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.738 20:00:51 -- common/autotest_common.sh@887 -- # return 0 
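Note: this first nbd phase (nbd_rpc_start_stop_verify) repeats one pattern per bdev, visible above and below: attach the bdev through the SPDK nbd server, wait for the kernel device node, and prove it is readable with a single 4 KiB direct-I/O read. Stripped of the wrappers it is roughly (run from the spdk checkout):
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1    # no device given, kernel picks the next free /dev/nbdN (here nbd0)
  grep -q -w nbd0 /proc/partitions                                   # retried until the device shows up
  dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0   # the stop half runs once all six devices are up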
00:07:43.738 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.738 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:43.738 20:00:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:44.000 20:00:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:44.000 20:00:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:44.000 20:00:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:44.000 20:00:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:44.000 20:00:51 -- common/autotest_common.sh@867 -- # local i 00:07:44.000 20:00:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:44.000 20:00:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:44.000 20:00:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:44.000 20:00:51 -- common/autotest_common.sh@871 -- # break 00:07:44.000 20:00:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:44.000 20:00:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:44.000 20:00:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.000 1+0 records in 00:07:44.000 1+0 records out 00:07:44.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103296 s, 4.0 MB/s 00:07:44.000 20:00:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.000 20:00:51 -- common/autotest_common.sh@884 -- # size=4096 00:07:44.000 20:00:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.000 20:00:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:44.000 20:00:51 -- common/autotest_common.sh@887 -- # return 0 00:07:44.000 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.000 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.000 20:00:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:44.260 20:00:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:44.260 20:00:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:44.260 20:00:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:44.260 20:00:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:44.260 20:00:51 -- common/autotest_common.sh@867 -- # local i 00:07:44.260 20:00:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:44.260 20:00:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:44.260 20:00:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:44.260 20:00:51 -- common/autotest_common.sh@871 -- # break 00:07:44.260 20:00:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:44.260 20:00:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:44.260 20:00:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.260 1+0 records in 00:07:44.260 1+0 records out 00:07:44.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889329 s, 4.6 MB/s 00:07:44.260 20:00:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.260 20:00:51 -- common/autotest_common.sh@884 -- # size=4096 00:07:44.260 20:00:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.261 20:00:51 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:44.261 20:00:51 -- common/autotest_common.sh@887 -- # return 0 00:07:44.261 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.261 20:00:51 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.261 20:00:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:44.523 20:00:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:44.523 20:00:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:44.523 20:00:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:44.523 20:00:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:44.523 20:00:52 -- common/autotest_common.sh@867 -- # local i 00:07:44.523 20:00:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:44.523 20:00:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:44.523 20:00:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:44.523 20:00:52 -- common/autotest_common.sh@871 -- # break 00:07:44.523 20:00:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:44.523 20:00:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:44.523 20:00:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.523 1+0 records in 00:07:44.523 1+0 records out 00:07:44.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106358 s, 3.9 MB/s 00:07:44.523 20:00:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.523 20:00:52 -- common/autotest_common.sh@884 -- # size=4096 00:07:44.523 20:00:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.523 20:00:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:44.523 20:00:52 -- common/autotest_common.sh@887 -- # return 0 00:07:44.523 20:00:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.523 20:00:52 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.523 20:00:52 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd0", 00:07:44.782 "bdev_name": "Nvme0n1" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd1", 00:07:44.782 "bdev_name": "Nvme1n1" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd2", 00:07:44.782 "bdev_name": "Nvme2n1" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd3", 00:07:44.782 "bdev_name": "Nvme2n2" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd4", 00:07:44.782 "bdev_name": "Nvme2n3" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd5", 00:07:44.782 "bdev_name": "Nvme3n1" 00:07:44.782 } 00:07:44.782 ]' 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd0", 00:07:44.782 "bdev_name": "Nvme0n1" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd1", 00:07:44.782 "bdev_name": "Nvme1n1" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd2", 00:07:44.782 "bdev_name": "Nvme2n1" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd3", 00:07:44.782 "bdev_name": "Nvme2n2" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": 
"/dev/nbd4", 00:07:44.782 "bdev_name": "Nvme2n3" 00:07:44.782 }, 00:07:44.782 { 00:07:44.782 "nbd_device": "/dev/nbd5", 00:07:44.782 "bdev_name": "Nvme3n1" 00:07:44.782 } 00:07:44.782 ]' 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@51 -- # local i 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.782 20:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@41 -- # break 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.043 20:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@41 -- # break 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@41 -- # break 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.305 20:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:45.566 
20:00:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@41 -- # break 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.566 20:00:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@41 -- # break 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.829 20:00:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@41 -- # break 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.090 20:00:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@65 -- # true 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@122 -- # count=0 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@127 -- # return 0 00:07:46.351 20:00:53 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@12 -- # local i 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:46.351 20:00:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:46.612 /dev/nbd0 00:07:46.612 20:00:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:46.612 20:00:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:46.612 20:00:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:46.612 20:00:54 -- common/autotest_common.sh@867 -- # local i 00:07:46.612 20:00:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.612 20:00:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.612 20:00:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:46.612 20:00:54 -- common/autotest_common.sh@871 -- # break 00:07:46.612 20:00:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.612 20:00:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.612 20:00:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.612 1+0 records in 00:07:46.612 1+0 records out 00:07:46.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076043 s, 5.4 MB/s 00:07:46.612 20:00:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.612 20:00:54 -- common/autotest_common.sh@884 -- # size=4096 00:07:46.612 20:00:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.612 20:00:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.612 20:00:54 -- common/autotest_common.sh@887 -- # return 0 00:07:46.612 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.612 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:46.612 20:00:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:46.612 /dev/nbd1 00:07:46.874 20:00:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:46.874 20:00:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:46.874 20:00:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:46.874 20:00:54 -- common/autotest_common.sh@867 -- # local i 00:07:46.874 20:00:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.874 20:00:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.874 20:00:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:46.874 20:00:54 -- common/autotest_common.sh@871 -- # break 
00:07:46.875 20:00:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.875 20:00:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.875 20:00:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.875 1+0 records in 00:07:46.875 1+0 records out 00:07:46.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108431 s, 3.8 MB/s 00:07:46.875 20:00:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.875 20:00:54 -- common/autotest_common.sh@884 -- # size=4096 00:07:46.875 20:00:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.875 20:00:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.875 20:00:54 -- common/autotest_common.sh@887 -- # return 0 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:46.875 /dev/nbd10 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:46.875 20:00:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:46.875 20:00:54 -- common/autotest_common.sh@867 -- # local i 00:07:46.875 20:00:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.875 20:00:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.875 20:00:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:46.875 20:00:54 -- common/autotest_common.sh@871 -- # break 00:07:46.875 20:00:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.875 20:00:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.875 20:00:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.875 1+0 records in 00:07:46.875 1+0 records out 00:07:46.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046388 s, 8.8 MB/s 00:07:46.875 20:00:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.875 20:00:54 -- common/autotest_common.sh@884 -- # size=4096 00:07:46.875 20:00:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.875 20:00:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.875 20:00:54 -- common/autotest_common.sh@887 -- # return 0 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:46.875 20:00:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:47.136 /dev/nbd11 00:07:47.136 20:00:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:47.136 20:00:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:47.136 20:00:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:47.136 20:00:54 -- common/autotest_common.sh@867 -- # local i 00:07:47.136 20:00:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.136 20:00:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.136 20:00:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:47.136 20:00:54 -- 
common/autotest_common.sh@871 -- # break 00:07:47.136 20:00:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.136 20:00:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.136 20:00:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.136 1+0 records in 00:07:47.136 1+0 records out 00:07:47.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110774 s, 3.7 MB/s 00:07:47.136 20:00:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.136 20:00:54 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.136 20:00:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.136 20:00:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.136 20:00:54 -- common/autotest_common.sh@887 -- # return 0 00:07:47.136 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.136 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.136 20:00:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:47.398 /dev/nbd12 00:07:47.398 20:00:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:47.398 20:00:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:47.398 20:00:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:47.398 20:00:54 -- common/autotest_common.sh@867 -- # local i 00:07:47.398 20:00:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.398 20:00:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.398 20:00:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:47.398 20:00:54 -- common/autotest_common.sh@871 -- # break 00:07:47.398 20:00:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.398 20:00:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.398 20:00:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.398 1+0 records in 00:07:47.398 1+0 records out 00:07:47.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052679 s, 7.8 MB/s 00:07:47.398 20:00:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.398 20:00:54 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.398 20:00:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.398 20:00:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.398 20:00:54 -- common/autotest_common.sh@887 -- # return 0 00:07:47.398 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.398 20:00:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.398 20:00:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:47.657 /dev/nbd13 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:47.657 20:00:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:47.657 20:00:55 -- common/autotest_common.sh@867 -- # local i 00:07:47.657 20:00:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.657 20:00:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.657 20:00:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:47.657 20:00:55 -- common/autotest_common.sh@871 -- # break 00:07:47.657 20:00:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.657 20:00:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.657 20:00:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.657 1+0 records in 00:07:47.657 1+0 records out 00:07:47.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000987741 s, 4.1 MB/s 00:07:47.657 20:00:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.657 20:00:55 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.657 20:00:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.657 20:00:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.657 20:00:55 -- common/autotest_common.sh@887 -- # return 0 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.657 20:00:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd0", 00:07:47.918 "bdev_name": "Nvme0n1" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd1", 00:07:47.918 "bdev_name": "Nvme1n1" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd10", 00:07:47.918 "bdev_name": "Nvme2n1" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd11", 00:07:47.918 "bdev_name": "Nvme2n2" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd12", 00:07:47.918 "bdev_name": "Nvme2n3" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd13", 00:07:47.918 "bdev_name": "Nvme3n1" 00:07:47.918 } 00:07:47.918 ]' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd0", 00:07:47.918 "bdev_name": "Nvme0n1" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd1", 00:07:47.918 "bdev_name": "Nvme1n1" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd10", 00:07:47.918 "bdev_name": "Nvme2n1" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd11", 00:07:47.918 "bdev_name": "Nvme2n2" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd12", 00:07:47.918 "bdev_name": "Nvme2n3" 00:07:47.918 }, 00:07:47.918 { 00:07:47.918 "nbd_device": "/dev/nbd13", 00:07:47.918 "bdev_name": "Nvme3n1" 00:07:47.918 } 00:07:47.918 ]' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:47.918 /dev/nbd1 00:07:47.918 /dev/nbd10 00:07:47.918 /dev/nbd11 00:07:47.918 /dev/nbd12 00:07:47.918 /dev/nbd13' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:47.918 /dev/nbd1 00:07:47.918 /dev/nbd10 00:07:47.918 /dev/nbd11 00:07:47.918 /dev/nbd12 00:07:47.918 /dev/nbd13' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@65 -- # count=6 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:47.918 256+0 records in 00:07:47.918 256+0 records out 00:07:47.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00842694 s, 124 MB/s 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.918 20:00:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.179 256+0 records in 00:07:48.179 256+0 records out 00:07:48.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214165 s, 4.9 MB/s 00:07:48.179 20:00:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.179 20:00:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.440 256+0 records in 00:07:48.440 256+0 records out 00:07:48.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240174 s, 4.4 MB/s 00:07:48.440 20:00:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.440 20:00:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:48.440 256+0 records in 00:07:48.440 256+0 records out 00:07:48.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124889 s, 8.4 MB/s 00:07:48.440 20:00:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.440 20:00:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:48.700 256+0 records in 00:07:48.700 256+0 records out 00:07:48.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119647 s, 8.8 MB/s 00:07:48.700 20:00:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.700 20:00:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:48.700 256+0 records in 00:07:48.700 256+0 records out 00:07:48.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156212 s, 6.7 MB/s 00:07:48.700 20:00:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.700 20:00:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:48.960 256+0 records in 00:07:48.960 256+0 records out 00:07:48.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216805 s, 4.8 MB/s 00:07:48.960 20:00:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:48.960 20:00:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:48.960 20:00:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.960 20:00:56 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@51 -- # local i 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.961 20:00:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@41 -- # break 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.220 20:00:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.481 20:00:57 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@41 -- # break 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.481 20:00:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@41 -- # break 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.740 20:00:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.000 20:00:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.261 20:00:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:50.522 20:00:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:50.522 20:00:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:50.522 20:00:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.523 20:00:57 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@65 -- # true 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@65 -- # count=0 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:50.783 20:00:58 -- bdev/nbd_common.sh@104 -- # count=0 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@109 -- # return 0 00:07:50.784 20:00:58 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:50.784 20:00:58 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:51.044 malloc_lvol_verify 00:07:51.044 20:00:58 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:51.044 3bbda4cc-9fe2-4478-bf77-6a2719de815f 00:07:51.044 20:00:58 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:51.305 747671da-18fb-4d67-922b-804433df8f37 00:07:51.305 20:00:58 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:51.612 /dev/nbd0 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:51.612 mke2fs 1.47.0 (5-Feb-2023) 00:07:51.612 Discarding device blocks: 0/4096 done 00:07:51.612 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:51.612 00:07:51.612 Allocating group tables: 0/1 done 00:07:51.612 Writing inode tables: 0/1 done 00:07:51.612 Creating journal (1024 blocks): done 00:07:51.612 Writing superblocks and filesystem accounting information: 0/1 done 00:07:51.612 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@51 -- # local i 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.612 20:00:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@41 -- # break 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:51.874 20:00:59 -- bdev/nbd_common.sh@147 -- # return 0 00:07:51.874 20:00:59 -- bdev/blockdev.sh@324 -- # killprocess 60356 00:07:51.874 20:00:59 -- common/autotest_common.sh@936 -- # '[' -z 60356 ']' 00:07:51.874 20:00:59 -- common/autotest_common.sh@940 -- # kill -0 60356 00:07:51.874 20:00:59 -- common/autotest_common.sh@941 -- # uname 00:07:51.874 20:00:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:51.874 20:00:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60356 00:07:51.874 20:00:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:51.874 killing process with pid 60356 00:07:51.874 20:00:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:51.874 20:00:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60356' 00:07:51.874 20:00:59 -- common/autotest_common.sh@955 -- # kill 60356 00:07:51.874 20:00:59 -- common/autotest_common.sh@960 -- # wait 60356 00:07:52.814 20:01:00 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:52.814 00:07:52.814 real 0m11.294s 00:07:52.814 user 0m15.453s 00:07:52.814 sys 0m3.448s 00:07:52.814 20:01:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.814 ************************************ 00:07:52.814 END TEST bdev_nbd 00:07:52.814 ************************************ 00:07:52.814 20:01:00 -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 20:01:00 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:52.814 20:01:00 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:52.814 skipping fio tests on NVMe due to multi-ns failures. 00:07:52.814 20:01:00 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:52.814 20:01:00 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:52.814 20:01:00 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.814 20:01:00 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:52.814 20:01:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.814 20:01:00 -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 ************************************ 00:07:52.814 START TEST bdev_verify 00:07:52.814 ************************************ 00:07:52.814 20:01:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.814 [2024-12-16 20:01:00.419223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:52.814 [2024-12-16 20:01:00.419351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60748 ] 00:07:53.075 [2024-12-16 20:01:00.569237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.336 [2024-12-16 20:01:00.794207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.336 [2024-12-16 20:01:00.794390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.907 Running I/O for 5 seconds... 00:07:59.237 00:07:59.237 Latency(us) 00:07:59.237 [2024-12-16T20:01:06.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x0 length 0xbd0bd 00:07:59.237 Nvme0n1 : 5.04 2506.81 9.79 0.00 0.00 50913.43 9376.69 62511.26 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:59.237 Nvme0n1 : 5.04 2515.43 9.83 0.00 0.00 50715.63 10233.70 66544.25 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x0 length 0xa0000 00:07:59.237 Nvme1n1 : 5.05 2505.18 9.79 0.00 0.00 50900.66 11998.13 60898.07 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0xa0000 length 0xa0000 00:07:59.237 Nvme1n1 : 5.05 2519.11 9.84 0.00 0.00 50596.20 4587.52 58478.28 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x0 length 0x80000 00:07:59.237 Nvme2n1 : 5.05 2504.49 9.78 0.00 0.00 50845.50 10939.47 57268.38 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x80000 length 0x80000 00:07:59.237 Nvme2n1 : 5.06 2524.12 9.86 0.00 0.00 50400.69 3554.07 57268.38 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x0 length 0x80000 00:07:59.237 Nvme2n2 : 5.05 2509.87 9.80 0.00 0.00 50700.81 4486.70 57268.38 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x80000 length 0x80000 00:07:59.237 Nvme2n2 : 5.06 2523.52 9.86 0.00 0.00 50359.90 3705.30 59284.87 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x0 length 0x80000 00:07:59.237 Nvme2n3 : 5.06 2508.07 9.80 0.00 0.00 50664.45 7309.78 58478.28 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x80000 length 0x80000 00:07:59.237 Nvme2n3 : 5.06 2521.38 9.85 0.00 0.00 50264.78 7259.37 57671.68 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x0 length 0x20000 00:07:59.237 Nvme3n1 : 5.07 
2504.69 9.78 0.00 0.00 50642.12 13611.32 58881.58 00:07:59.237 [2024-12-16T20:01:06.877Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.237 Verification LBA range: start 0x20000 length 0x20000 00:07:59.237 Nvme3n1 : 5.07 2527.47 9.87 0.00 0.00 50134.83 2974.33 56865.08 00:07:59.237 [2024-12-16T20:01:06.877Z] =================================================================================================================== 00:07:59.237 [2024-12-16T20:01:06.877Z] Total : 30170.13 117.85 0.00 0.00 50593.90 2974.33 66544.25 00:08:17.395 00:08:17.395 real 0m24.290s 00:08:17.395 user 0m32.075s 00:08:17.395 sys 0m0.596s 00:08:17.395 20:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.395 ************************************ 00:08:17.395 END TEST bdev_verify 00:08:17.395 ************************************ 00:08:17.395 20:01:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.395 20:01:24 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.395 20:01:24 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:17.395 20:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.395 20:01:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.395 ************************************ 00:08:17.395 START TEST bdev_verify_big_io 00:08:17.395 ************************************ 00:08:17.395 20:01:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.395 [2024-12-16 20:01:24.783556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.395 [2024-12-16 20:01:24.783798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60959 ] 00:08:17.395 [2024-12-16 20:01:24.931090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.664 [2024-12-16 20:01:25.111291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.664 [2024-12-16 20:01:25.111336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.261 Running I/O for 5 seconds... 
00:08:24.823 00:08:24.824 Latency(us) 00:08:24.824 [2024-12-16T20:01:32.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x0 length 0xbd0b 00:08:24.824 Nvme0n1 : 5.31 352.52 22.03 0.00 0.00 360088.48 7360.20 483958.15 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:24.824 Nvme0n1 : 5.37 196.08 12.25 0.00 0.00 640891.62 27625.94 929199.66 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x0 length 0xa000 00:08:24.824 Nvme1n1 : 5.31 352.39 22.02 0.00 0.00 356601.60 7662.67 448467.89 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0xa000 length 0xa000 00:08:24.824 Nvme1n1 : 5.40 201.69 12.61 0.00 0.00 606959.56 26617.70 816276.09 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x0 length 0x8000 00:08:24.824 Nvme2n1 : 5.31 352.27 22.02 0.00 0.00 353126.49 8469.27 404911.66 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x8000 length 0x8000 00:08:24.824 Nvme2n1 : 5.40 201.63 12.60 0.00 0.00 591887.06 27222.65 706578.90 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x0 length 0x8000 00:08:24.824 Nvme2n2 : 5.32 352.15 22.01 0.00 0.00 349653.94 9275.86 367808.20 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x8000 length 0x8000 00:08:24.824 Nvme2n2 : 5.46 222.48 13.90 0.00 0.00 525993.63 18249.26 632371.99 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x0 length 0x8000 00:08:24.824 Nvme2n3 : 5.32 351.99 22.00 0.00 0.00 346182.57 10637.00 341997.10 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x8000 length 0x8000 00:08:24.824 Nvme2n3 : 5.53 267.13 16.70 0.00 0.00 430528.66 6452.78 606560.89 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x0 length 0x2000 00:08:24.824 Nvme3n1 : 5.32 358.76 22.42 0.00 0.00 336796.62 768.79 353289.45 00:08:24.824 [2024-12-16T20:01:32.464Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.824 Verification LBA range: start 0x2000 length 0x2000 00:08:24.824 Nvme3n1 : 5.60 340.72 21.29 0.00 0.00 333047.79 627.00 600108.11 00:08:24.824 [2024-12-16T20:01:32.464Z] =================================================================================================================== 00:08:24.824 [2024-12-16T20:01:32.464Z] Total : 3549.80 221.86 0.00 0.00 410413.90 627.00 929199.66 00:08:25.762 00:08:25.762 real 0m8.578s 00:08:25.762 user 0m15.453s 
00:08:25.762 sys 0m0.248s 00:08:25.762 20:01:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.762 ************************************ 00:08:25.762 END TEST bdev_verify_big_io 00:08:25.762 ************************************ 00:08:25.762 20:01:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.762 20:01:33 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.762 20:01:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:25.762 20:01:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.762 20:01:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.762 ************************************ 00:08:25.762 START TEST bdev_write_zeroes 00:08:25.762 ************************************ 00:08:25.762 20:01:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.022 [2024-12-16 20:01:33.407458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.022 [2024-12-16 20:01:33.407627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61073 ] 00:08:26.022 [2024-12-16 20:01:33.557612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.280 [2024-12-16 20:01:33.721996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.849 Running I/O for 1 seconds... 00:08:27.795 00:08:27.795 Latency(us) 00:08:27.795 [2024-12-16T20:01:35.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.795 [2024-12-16T20:01:35.435Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.795 Nvme0n1 : 1.02 9537.19 37.25 0.00 0.00 13384.47 4738.76 26214.40 00:08:27.795 [2024-12-16T20:01:35.435Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.795 Nvme1n1 : 1.02 9526.21 37.21 0.00 0.00 13385.13 7461.02 20971.52 00:08:27.795 [2024-12-16T20:01:35.435Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.795 Nvme2n1 : 1.02 9502.29 37.12 0.00 0.00 13401.84 7612.26 21475.64 00:08:27.795 [2024-12-16T20:01:35.435Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.795 Nvme2n2 : 1.02 9491.31 37.08 0.00 0.00 13387.01 7410.61 21979.77 00:08:27.795 [2024-12-16T20:01:35.435Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.795 Nvme2n3 : 1.02 9514.93 37.17 0.00 0.00 13285.76 6604.01 21878.94 00:08:27.795 [2024-12-16T20:01:35.435Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.795 Nvme3n1 : 1.02 9441.63 36.88 0.00 0.00 13369.68 7713.08 21979.77 00:08:27.795 [2024-12-16T20:01:35.435Z] =================================================================================================================== 00:08:27.795 [2024-12-16T20:01:35.435Z] Total : 57013.55 222.71 0.00 0.00 13368.93 4738.76 26214.40 00:08:28.739 00:08:28.739 real 0m2.930s 00:08:28.739 user 0m2.599s 00:08:28.739 sys 0m0.213s 00:08:28.739 20:01:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.739 20:01:36 -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.739 ************************************ 00:08:28.739 END TEST bdev_write_zeroes 00:08:28.739 ************************************ 00:08:28.739 20:01:36 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:28.739 20:01:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:28.739 20:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.739 20:01:36 -- common/autotest_common.sh@10 -- # set +x 00:08:28.739 ************************************ 00:08:28.739 START TEST bdev_json_nonenclosed 00:08:28.739 ************************************ 00:08:28.739 20:01:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.001 [2024-12-16 20:01:36.418941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:29.001 [2024-12-16 20:01:36.419079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61122 ] 00:08:29.001 [2024-12-16 20:01:36.572856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.262 [2024-12-16 20:01:36.832444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.262 [2024-12-16 20:01:36.832683] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:29.262 [2024-12-16 20:01:36.832716] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.834 00:08:29.834 real 0m0.825s 00:08:29.834 user 0m0.586s 00:08:29.834 sys 0m0.129s 00:08:29.834 ************************************ 00:08:29.834 END TEST bdev_json_nonenclosed 00:08:29.834 ************************************ 00:08:29.834 20:01:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.834 20:01:37 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 20:01:37 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.834 20:01:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:29.834 20:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.834 20:01:37 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 ************************************ 00:08:29.834 START TEST bdev_json_nonarray 00:08:29.834 ************************************ 00:08:29.834 20:01:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.834 [2024-12-16 20:01:37.308434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:29.834 [2024-12-16 20:01:37.308562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61152 ] 00:08:29.834 [2024-12-16 20:01:37.457965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.095 [2024-12-16 20:01:37.681193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.096 [2024-12-16 20:01:37.681417] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:30.096 [2024-12-16 20:01:37.681439] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.356 00:08:30.356 real 0m0.749s 00:08:30.356 user 0m0.511s 00:08:30.356 sys 0m0.131s 00:08:30.356 20:01:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.356 ************************************ 00:08:30.356 END TEST bdev_json_nonarray 00:08:30.356 ************************************ 00:08:30.356 20:01:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.618 20:01:38 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:30.618 20:01:38 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:30.618 20:01:38 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:30.618 20:01:38 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:30.618 20:01:38 -- bdev/blockdev.sh@809 -- # cleanup 00:08:30.618 20:01:38 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:30.618 20:01:38 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:30.618 20:01:38 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:30.618 20:01:38 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:30.618 20:01:38 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:30.618 20:01:38 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:30.618 ************************************ 00:08:30.618 END TEST blockdev_nvme 00:08:30.618 ************************************ 00:08:30.618 00:08:30.618 real 0m55.491s 00:08:30.618 user 1m15.969s 00:08:30.618 sys 0m5.924s 00:08:30.618 20:01:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.618 20:01:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.618 20:01:38 -- spdk/autotest.sh@206 -- # uname -s 00:08:30.618 20:01:38 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:30.618 20:01:38 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:30.618 20:01:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:30.618 20:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.618 20:01:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.618 ************************************ 00:08:30.618 START TEST blockdev_nvme_gpt 00:08:30.618 ************************************ 00:08:30.618 20:01:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:30.618 * Looking for test storage... 
00:08:30.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:30.618 20:01:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:30.618 20:01:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:30.618 20:01:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:30.618 20:01:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:30.618 20:01:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:30.618 20:01:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:30.618 20:01:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:30.618 20:01:38 -- scripts/common.sh@335 -- # IFS=.-: 00:08:30.618 20:01:38 -- scripts/common.sh@335 -- # read -ra ver1 00:08:30.618 20:01:38 -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.618 20:01:38 -- scripts/common.sh@336 -- # read -ra ver2 00:08:30.618 20:01:38 -- scripts/common.sh@337 -- # local 'op=<' 00:08:30.618 20:01:38 -- scripts/common.sh@339 -- # ver1_l=2 00:08:30.618 20:01:38 -- scripts/common.sh@340 -- # ver2_l=1 00:08:30.618 20:01:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:30.618 20:01:38 -- scripts/common.sh@343 -- # case "$op" in 00:08:30.618 20:01:38 -- scripts/common.sh@344 -- # : 1 00:08:30.618 20:01:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:30.618 20:01:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.618 20:01:38 -- scripts/common.sh@364 -- # decimal 1 00:08:30.880 20:01:38 -- scripts/common.sh@352 -- # local d=1 00:08:30.880 20:01:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.880 20:01:38 -- scripts/common.sh@354 -- # echo 1 00:08:30.880 20:01:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:30.880 20:01:38 -- scripts/common.sh@365 -- # decimal 2 00:08:30.880 20:01:38 -- scripts/common.sh@352 -- # local d=2 00:08:30.880 20:01:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.880 20:01:38 -- scripts/common.sh@354 -- # echo 2 00:08:30.880 20:01:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:30.880 20:01:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:30.880 20:01:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:30.880 20:01:38 -- scripts/common.sh@367 -- # return 0 00:08:30.880 20:01:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.880 20:01:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.880 --rc genhtml_branch_coverage=1 00:08:30.880 --rc genhtml_function_coverage=1 00:08:30.880 --rc genhtml_legend=1 00:08:30.880 --rc geninfo_all_blocks=1 00:08:30.880 --rc geninfo_unexecuted_blocks=1 00:08:30.880 00:08:30.880 ' 00:08:30.880 20:01:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.880 --rc genhtml_branch_coverage=1 00:08:30.880 --rc genhtml_function_coverage=1 00:08:30.880 --rc genhtml_legend=1 00:08:30.880 --rc geninfo_all_blocks=1 00:08:30.880 --rc geninfo_unexecuted_blocks=1 00:08:30.880 00:08:30.880 ' 00:08:30.880 20:01:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.880 --rc genhtml_branch_coverage=1 00:08:30.880 --rc genhtml_function_coverage=1 00:08:30.880 --rc genhtml_legend=1 00:08:30.880 --rc geninfo_all_blocks=1 00:08:30.880 --rc geninfo_unexecuted_blocks=1 00:08:30.880 00:08:30.880 ' 00:08:30.880 20:01:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.880 --rc genhtml_branch_coverage=1 00:08:30.880 --rc genhtml_function_coverage=1 00:08:30.880 --rc genhtml_legend=1 00:08:30.880 --rc geninfo_all_blocks=1 00:08:30.880 --rc geninfo_unexecuted_blocks=1 00:08:30.880 00:08:30.880 ' 00:08:30.880 20:01:38 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:30.880 20:01:38 -- bdev/nbd_common.sh@6 -- # set -e 00:08:30.880 20:01:38 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:30.880 20:01:38 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:30.880 20:01:38 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:30.880 20:01:38 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:30.880 20:01:38 -- bdev/blockdev.sh@18 -- # : 00:08:30.880 20:01:38 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:30.880 20:01:38 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:30.880 20:01:38 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:30.880 20:01:38 -- bdev/blockdev.sh@672 -- # uname -s 00:08:30.880 20:01:38 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:30.880 20:01:38 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:30.880 20:01:38 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:30.880 20:01:38 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:30.880 20:01:38 -- bdev/blockdev.sh@682 -- # dek= 00:08:30.880 20:01:38 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:30.880 20:01:38 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:30.880 20:01:38 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:30.880 20:01:38 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:30.880 20:01:38 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:30.880 20:01:38 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:30.880 20:01:38 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61235 00:08:30.880 20:01:38 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:30.880 20:01:38 -- bdev/blockdev.sh@47 -- # waitforlisten 61235 00:08:30.880 20:01:38 -- common/autotest_common.sh@829 -- # '[' -z 61235 ']' 00:08:30.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.880 20:01:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.880 20:01:38 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:30.880 20:01:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.880 20:01:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.880 20:01:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.880 20:01:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.880 [2024-12-16 20:01:38.352194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:30.880 [2024-12-16 20:01:38.352350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:08:30.880 [2024-12-16 20:01:38.502867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.142 [2024-12-16 20:01:38.721770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.142 [2024-12-16 20:01:38.721996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.527 20:01:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.528 20:01:39 -- common/autotest_common.sh@862 -- # return 0 00:08:32.528 20:01:39 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:32.528 20:01:39 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:32.528 20:01:39 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.788 Waiting for block devices as requested 00:08:32.788 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.049 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.049 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.049 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.339 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:38.339 20:01:45 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:38.339 20:01:45 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:38.339 20:01:45 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:38.339 20:01:45 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:38.339 20:01:45 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:38.339 20:01:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:38.339 20:01:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:38.339 20:01:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:38.339 20:01:45 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:38.339 20:01:45 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:38.339 20:01:45 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:38.339 20:01:45 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:38.339 20:01:45 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:38.339 20:01:45 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:38.339 20:01:45 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:38.339 20:01:45 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:38.339 BYT; 00:08:38.339 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:38.339 20:01:45 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:38.339 BYT; 00:08:38.339 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:38.339 20:01:45 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:38.339 20:01:45 -- bdev/blockdev.sh@114 -- # break 00:08:38.339 20:01:45 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:38.339 20:01:45 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:38.339 20:01:45 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:38.339 20:01:45 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:38.339 20:01:45 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:38.339 20:01:45 -- scripts/common.sh@410 -- # local spdk_guid 00:08:38.339 20:01:45 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:38.339 20:01:45 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:38.339 20:01:45 -- scripts/common.sh@415 -- # IFS='()' 00:08:38.339 20:01:45 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:38.339 20:01:45 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:38.339 20:01:45 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:38.339 20:01:45 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:38.339 20:01:45 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:38.339 20:01:45 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:38.339 20:01:45 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:38.339 20:01:45 -- scripts/common.sh@422 -- # local spdk_guid 00:08:38.339 20:01:45 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:38.339 20:01:45 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:38.339 20:01:45 -- scripts/common.sh@427 -- # IFS='()' 00:08:38.339 20:01:45 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:38.339 20:01:45 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:38.339 20:01:45 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:38.339 20:01:45 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:38.339 20:01:45 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:38.339 20:01:45 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:38.339 20:01:45 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:39.274 The operation has completed successfully. 00:08:39.274 20:01:46 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:40.649 The operation has completed successfully. 
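For readers tracing the GPT setup above: stripped of the xtrace noise, it amounts to roughly the following shell sequence (a sketch using the device and GUIDs reported in this run; setup_gpt_conf in bdev/blockdev.sh is the authoritative source):

    # Label the blank namespace with GPT and carve it into two halves.
    parted -s /dev/nvme2n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Tag partition 1 with SPDK_GPT_PART_TYPE_GUID and partition 2 with the old
    # GUID, both pulled from module/bdev/gpt/gpt.h, plus the unique partition
    # GUIDs generated above.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1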
00:08:40.649 20:01:47 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:40.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.166 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:41.166 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:41.166 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:41.166 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:41.166 20:01:48 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:41.166 20:01:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.166 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:08:41.166 [] 00:08:41.166 20:01:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.166 20:01:48 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:41.166 20:01:48 -- bdev/blockdev.sh@79 -- # local json 00:08:41.166 20:01:48 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:41.166 20:01:48 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:41.425 20:01:48 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:41.425 20:01:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.425 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 20:01:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.684 20:01:49 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:41.684 20:01:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.684 20:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 20:01:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.684 20:01:49 -- bdev/blockdev.sh@738 -- # cat 00:08:41.684 20:01:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:41.684 20:01:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.684 20:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 20:01:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.684 20:01:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:41.684 20:01:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.684 20:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 20:01:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.684 20:01:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:41.684 20:01:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.684 20:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 20:01:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.684 20:01:49 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:41.684 20:01:49 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:41.684 20:01:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.684 20:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 20:01:49 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:41.684 20:01:49 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.684 20:01:49 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:41.684 20:01:49 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:41.685 20:01:49 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b16c40f2-8658-4e79-99c9-d7983254ded0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b16c40f2-8658-4e79-99c9-d7983254ded0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "28e70c11-117a-4664-b64a-617c81d3b9ad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "28e70c11-117a-4664-b64a-617c81d3b9ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "07fa7b3c-9cfa-4dff-98c1-ef092f06d598"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "07fa7b3c-9cfa-4dff-98c1-ef092f06d598",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "950093d6-2046-454c-ac83-30db3c7a6dc8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "950093d6-2046-454c-ac83-30db3c7a6dc8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d23b200d-230f-47c0-9542-4211ada15eb1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d23b200d-230f-47c0-9542-4211ada15eb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:41.685 20:01:49 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:41.685 20:01:49 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:41.685 20:01:49 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:41.685 20:01:49 -- bdev/blockdev.sh@752 -- # killprocess 61235 00:08:41.685 20:01:49 -- common/autotest_common.sh@936 -- # '[' -z 61235 ']' 00:08:41.685 20:01:49 -- common/autotest_common.sh@940 -- # kill -0 61235 00:08:41.685 20:01:49 -- common/autotest_common.sh@941 -- # uname 00:08:41.685 20:01:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.685 20:01:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61235 00:08:41.685 killing process with pid 61235 00:08:41.685 20:01:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.685 20:01:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.685 20:01:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61235' 00:08:41.685 20:01:49 -- common/autotest_common.sh@955 -- # kill 61235 00:08:41.685 20:01:49 -- common/autotest_common.sh@960 -- # wait 61235 00:08:43.062 20:01:50 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.062 20:01:50 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:43.062 20:01:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:43.062 20:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.062 20:01:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.062 ************************************ 00:08:43.062 START TEST bdev_hello_world 00:08:43.062 ************************************ 00:08:43.062 20:01:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:43.062 [2024-12-16 20:01:50.584062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.062 [2024-12-16 20:01:50.584170] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61889 ] 00:08:43.320 [2024-12-16 20:01:50.728732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.320 [2024-12-16 20:01:50.887433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.889 [2024-12-16 20:01:51.375730] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:43.889 [2024-12-16 20:01:51.375771] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:43.889 [2024-12-16 20:01:51.375787] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:43.889 [2024-12-16 20:01:51.377796] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:43.889 [2024-12-16 20:01:51.378320] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:43.889 [2024-12-16 20:01:51.378346] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:43.889 [2024-12-16 20:01:51.378557] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:43.889 00:08:43.889 [2024-12-16 20:01:51.378582] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:44.458 00:08:44.458 real 0m1.513s 00:08:44.458 user 0m1.218s 00:08:44.458 sys 0m0.189s 00:08:44.458 20:01:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.458 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 ************************************ 00:08:44.458 END TEST bdev_hello_world 00:08:44.458 ************************************ 00:08:44.458 20:01:52 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:44.458 20:01:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:44.458 20:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.458 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 ************************************ 00:08:44.458 START TEST bdev_bounds 00:08:44.458 ************************************ 00:08:44.458 20:01:52 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:44.458 20:01:52 -- bdev/blockdev.sh@288 -- # bdevio_pid=61930 00:08:44.458 20:01:52 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:44.458 Process bdevio pid: 61930 00:08:44.458 20:01:52 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61930' 00:08:44.458 20:01:52 -- bdev/blockdev.sh@291 -- # waitforlisten 61930 00:08:44.458 20:01:52 -- common/autotest_common.sh@829 -- # '[' -z 61930 ']' 00:08:44.458 20:01:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.458 20:01:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.458 20:01:52 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:44.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.458 20:01:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
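The waitforlisten call above simply polls until the freshly launched bdevio process answers on /var/tmp/spdk.sock. A minimal sketch of that loop (illustrative only; the real helper in test/common/autotest_common.sh handles more corner cases):

    pid=61930; rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do            # max_retries=100 as in the trace
        kill -0 "$pid" 2>/dev/null || exit 1   # give up if the target process died
        [ -S "$rpc_addr" ] && \
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5
    done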
00:08:44.458 20:01:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.458 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.719 [2024-12-16 20:01:52.136170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:44.719 [2024-12-16 20:01:52.136277] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61930 ] 00:08:44.719 [2024-12-16 20:01:52.282802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.980 [2024-12-16 20:01:52.473731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.980 [2024-12-16 20:01:52.473840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.980 [2024-12-16 20:01:52.473908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.358 20:01:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.358 20:01:53 -- common/autotest_common.sh@862 -- # return 0 00:08:46.358 20:01:53 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:46.358 I/O targets: 00:08:46.358 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:46.358 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:46.358 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:46.358 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.358 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.358 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.358 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:46.358 00:08:46.358 00:08:46.358 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.358 http://cunit.sourceforge.net/ 00:08:46.358 00:08:46.358 00:08:46.358 Suite: bdevio tests on: Nvme3n1 00:08:46.358 Test: blockdev write read block ...passed 00:08:46.358 Test: blockdev write zeroes read block ...passed 00:08:46.358 Test: blockdev write zeroes read no split ...passed 00:08:46.358 Test: blockdev write zeroes read split ...passed 00:08:46.358 Test: blockdev write zeroes read split partial ...passed 00:08:46.358 Test: blockdev reset ...[2024-12-16 20:01:53.771546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:46.358 [2024-12-16 20:01:53.773943] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.358 passed 00:08:46.358 Test: blockdev write read 8 blocks ...passed 00:08:46.358 Test: blockdev write read size > 128k ...passed 00:08:46.358 Test: blockdev write read invalid size ...passed 00:08:46.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.358 Test: blockdev write read max offset ...passed 00:08:46.358 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.358 Test: blockdev writev readv 8 blocks ...passed 00:08:46.358 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.358 Test: blockdev writev readv block ...passed 00:08:46.358 Test: blockdev writev readv size > 128k ...passed 00:08:46.358 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.358 Test: blockdev comparev and writev ...[2024-12-16 20:01:53.780869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27700a000 len:0x1000 00:08:46.358 [2024-12-16 20:01:53.780912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.358 passed 00:08:46.358 Test: blockdev nvme passthru rw ...passed 00:08:46.358 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:01:53.781629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.358 [2024-12-16 20:01:53.781655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.358 passed 00:08:46.358 Test: blockdev nvme admin passthru ...passed 00:08:46.358 Test: blockdev copy ...passed 00:08:46.358 Suite: bdevio tests on: Nvme2n3 00:08:46.358 Test: blockdev write read block ...passed 00:08:46.358 Test: blockdev write zeroes read block ...passed 00:08:46.358 Test: blockdev write zeroes read no split ...passed 00:08:46.358 Test: blockdev write zeroes read split ...passed 00:08:46.358 Test: blockdev write zeroes read split partial ...passed 00:08:46.358 Test: blockdev reset ...[2024-12-16 20:01:53.827017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:46.358 [2024-12-16 20:01:53.829401] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.358 passed 00:08:46.358 Test: blockdev write read 8 blocks ...passed 00:08:46.358 Test: blockdev write read size > 128k ...passed 00:08:46.358 Test: blockdev write read invalid size ...passed 00:08:46.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.358 Test: blockdev write read max offset ...passed 00:08:46.358 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.358 Test: blockdev writev readv 8 blocks ...passed 00:08:46.358 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.358 Test: blockdev writev readv block ...passed 00:08:46.358 Test: blockdev writev readv size > 128k ...passed 00:08:46.358 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.358 Test: blockdev comparev and writev ...[2024-12-16 20:01:53.835829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26cb04000 len:0x1000 00:08:46.358 [2024-12-16 20:01:53.835865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.358 passed 00:08:46.358 Test: blockdev nvme passthru rw ...passed 00:08:46.358 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:01:53.836594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.358 [2024-12-16 20:01:53.836616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.358 passed 00:08:46.358 Test: blockdev nvme admin passthru ...passed 00:08:46.358 Test: blockdev copy ...passed 00:08:46.358 Suite: bdevio tests on: Nvme2n2 00:08:46.358 Test: blockdev write read block ...passed 00:08:46.358 Test: blockdev write zeroes read block ...passed 00:08:46.358 Test: blockdev write zeroes read no split ...passed 00:08:46.358 Test: blockdev write zeroes read split ...passed 00:08:46.358 Test: blockdev write zeroes read split partial ...passed 00:08:46.359 Test: blockdev reset ...[2024-12-16 20:01:53.881899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:46.359 [2024-12-16 20:01:53.884493] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.359 passed 00:08:46.359 Test: blockdev write read 8 blocks ...passed 00:08:46.359 Test: blockdev write read size > 128k ...passed 00:08:46.359 Test: blockdev write read invalid size ...passed 00:08:46.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.359 Test: blockdev write read max offset ...passed 00:08:46.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.359 Test: blockdev writev readv 8 blocks ...passed 00:08:46.359 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.359 Test: blockdev writev readv block ...passed 00:08:46.359 Test: blockdev writev readv size > 128k ...passed 00:08:46.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.359 Test: blockdev comparev and writev ...[2024-12-16 20:01:53.891741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26cb04000 len:0x1000 00:08:46.359 [2024-12-16 20:01:53.891786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.359 passed 00:08:46.359 Test: blockdev nvme passthru rw ...passed 00:08:46.359 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:01:53.892574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.359 [2024-12-16 20:01:53.892601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.359 passed 00:08:46.359 Test: blockdev nvme admin passthru ...passed 00:08:46.359 Test: blockdev copy ...passed 00:08:46.359 Suite: bdevio tests on: Nvme2n1 00:08:46.359 Test: blockdev write read block ...passed 00:08:46.359 Test: blockdev write zeroes read block ...passed 00:08:46.359 Test: blockdev write zeroes read no split ...passed 00:08:46.359 Test: blockdev write zeroes read split ...passed 00:08:46.359 Test: blockdev write zeroes read split partial ...passed 00:08:46.359 Test: blockdev reset ...[2024-12-16 20:01:53.947804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:46.359 [2024-12-16 20:01:53.950585] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.359 passed 00:08:46.359 Test: blockdev write read 8 blocks ...passed 00:08:46.359 Test: blockdev write read size > 128k ...passed 00:08:46.359 Test: blockdev write read invalid size ...passed 00:08:46.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.359 Test: blockdev write read max offset ...passed 00:08:46.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.359 Test: blockdev writev readv 8 blocks ...passed 00:08:46.359 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.359 Test: blockdev writev readv block ...passed 00:08:46.359 Test: blockdev writev readv size > 128k ...passed 00:08:46.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.359 Test: blockdev comparev and writev ...[2024-12-16 20:01:53.957834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280c3c000 len:0x1000 00:08:46.359 [2024-12-16 20:01:53.957875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.359 passed 00:08:46.359 Test: blockdev nvme passthru rw ...passed 00:08:46.359 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:01:53.958529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.359 [2024-12-16 20:01:53.958555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.359 passed 00:08:46.359 Test: blockdev nvme admin passthru ...passed 00:08:46.359 Test: blockdev copy ...passed 00:08:46.359 Suite: bdevio tests on: Nvme1n1 00:08:46.359 Test: blockdev write read block ...passed 00:08:46.359 Test: blockdev write zeroes read block ...passed 00:08:46.359 Test: blockdev write zeroes read no split ...passed 00:08:46.359 Test: blockdev write zeroes read split ...passed 00:08:46.656 Test: blockdev write zeroes read split partial ...passed 00:08:46.656 Test: blockdev reset ...[2024-12-16 20:01:54.014100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:46.656 [2024-12-16 20:01:54.016523] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.656 passed 00:08:46.656 Test: blockdev write read 8 blocks ...passed 00:08:46.656 Test: blockdev write read size > 128k ...passed 00:08:46.656 Test: blockdev write read invalid size ...passed 00:08:46.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.656 Test: blockdev write read max offset ...passed 00:08:46.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.656 Test: blockdev writev readv 8 blocks ...passed 00:08:46.656 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.656 Test: blockdev writev readv block ...passed 00:08:46.656 Test: blockdev writev readv size > 128k ...passed 00:08:46.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.656 Test: blockdev comparev and writev ...[2024-12-16 20:01:54.023769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280c38000 len:0x1000 00:08:46.656 [2024-12-16 20:01:54.023806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.656 passed 00:08:46.656 Test: blockdev nvme passthru rw ...passed 00:08:46.656 Test: blockdev nvme passthru vendor specific ...[2024-12-16 20:01:54.024511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.656 [2024-12-16 20:01:54.024535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.656 passed 00:08:46.656 Test: blockdev nvme admin passthru ...passed 00:08:46.656 Test: blockdev copy ...passed 00:08:46.656 Suite: bdevio tests on: Nvme0n1p2 00:08:46.656 Test: blockdev write read block ...passed 00:08:46.656 Test: blockdev write zeroes read block ...passed 00:08:46.656 Test: blockdev write zeroes read no split ...passed 00:08:46.656 Test: blockdev write zeroes read split ...passed 00:08:46.656 Test: blockdev write zeroes read split partial ...passed 00:08:46.656 Test: blockdev reset ...[2024-12-16 20:01:54.084527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:46.656 [2024-12-16 20:01:54.086925] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:46.656 passed 00:08:46.656 Test: blockdev write read 8 blocks ...passed 00:08:46.656 Test: blockdev write read size > 128k ...passed 00:08:46.656 Test: blockdev write read invalid size ...passed 00:08:46.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.656 Test: blockdev write read max offset ...passed 00:08:46.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.656 Test: blockdev writev readv 8 blocks ...passed 00:08:46.656 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.656 Test: blockdev writev readv block ...passed 00:08:46.656 Test: blockdev writev readv size > 128k ...passed 00:08:46.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.656 Test: blockdev comparev and writev ...[2024-12-16 20:01:54.093523] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:46.656 separate metadata which is not supported yet. 
00:08:46.656 passed 00:08:46.656 Test: blockdev nvme passthru rw ...passed 00:08:46.656 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.656 Test: blockdev nvme admin passthru ...passed 00:08:46.656 Test: blockdev copy ...passed 00:08:46.656 Suite: bdevio tests on: Nvme0n1p1 00:08:46.656 Test: blockdev write read block ...passed 00:08:46.656 Test: blockdev write zeroes read block ...passed 00:08:46.656 Test: blockdev write zeroes read no split ...passed 00:08:46.656 Test: blockdev write zeroes read split ...passed 00:08:46.656 Test: blockdev write zeroes read split partial ...passed 00:08:46.656 Test: blockdev reset ...[2024-12-16 20:01:54.138235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:46.656 [2024-12-16 20:01:54.140585] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:46.656 passed 00:08:46.656 Test: blockdev write read 8 blocks ...passed 00:08:46.656 Test: blockdev write read size > 128k ...passed 00:08:46.656 Test: blockdev write read invalid size ...passed 00:08:46.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.656 Test: blockdev write read max offset ...passed 00:08:46.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.656 Test: blockdev writev readv 8 blocks ...passed 00:08:46.656 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.656 Test: blockdev writev readv block ...passed 00:08:46.656 Test: blockdev writev readv size > 128k ...passed 00:08:46.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.656 Test: blockdev comparev and writev ...[2024-12-16 20:01:54.146925] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:46.656 separate metadata which is not supported yet. 
00:08:46.656 passed 00:08:46.656 Test: blockdev nvme passthru rw ...passed 00:08:46.656 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.656 Test: blockdev nvme admin passthru ...passed 00:08:46.656 Test: blockdev copy ...passed 00:08:46.656 00:08:46.656 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.656 suites 7 7 n/a 0 0 00:08:46.656 tests 161 161 161 0 0 00:08:46.656 asserts 1006 1006 1006 0 n/a 00:08:46.656 00:08:46.656 Elapsed time = 1.150 seconds 00:08:46.656 0 00:08:46.656 20:01:54 -- bdev/blockdev.sh@293 -- # killprocess 61930 00:08:46.656 20:01:54 -- common/autotest_common.sh@936 -- # '[' -z 61930 ']' 00:08:46.656 20:01:54 -- common/autotest_common.sh@940 -- # kill -0 61930 00:08:46.656 20:01:54 -- common/autotest_common.sh@941 -- # uname 00:08:46.656 20:01:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.656 20:01:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61930 00:08:46.656 20:01:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:46.656 20:01:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:46.656 20:01:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61930' 00:08:46.656 killing process with pid 61930 00:08:46.656 20:01:54 -- common/autotest_common.sh@955 -- # kill 61930 00:08:46.656 20:01:54 -- common/autotest_common.sh@960 -- # wait 61930 00:08:47.223 20:01:54 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:47.223 00:08:47.223 real 0m2.630s 00:08:47.223 user 0m6.824s 00:08:47.223 sys 0m0.312s 00:08:47.223 20:01:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.223 20:01:54 -- common/autotest_common.sh@10 -- # set +x 00:08:47.223 ************************************ 00:08:47.223 END TEST bdev_bounds 00:08:47.223 ************************************ 00:08:47.223 20:01:54 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:47.223 20:01:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:47.223 20:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.223 20:01:54 -- common/autotest_common.sh@10 -- # set +x 00:08:47.223 ************************************ 00:08:47.223 START TEST bdev_nbd 00:08:47.223 ************************************ 00:08:47.223 20:01:54 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:47.223 20:01:54 -- bdev/blockdev.sh@298 -- # uname -s 00:08:47.223 20:01:54 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:47.223 20:01:54 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.223 20:01:54 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.223 20:01:54 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.223 20:01:54 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:47.223 20:01:54 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:47.223 20:01:54 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:47.223 20:01:54 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
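The bdev_nbd test starting here exercises one pattern per bdev, sketched below from the RPC calls visible in the trace that follows (socket path and bdev name are the ones used in this run; the scratch-file path is shortened here):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_dev=$($rpc nbd_start_disk Nvme0n1p1)                       # RPC returns the assigned /dev/nbdX
    dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # verify the device answers direct I/O
    $rpc nbd_get_disks                                             # JSON list of bdev <-> nbd mappings
    $rpc nbd_stop_disk "$nbd_dev"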
00:08:47.223 20:01:54 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:47.223 20:01:54 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:47.223 20:01:54 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.223 20:01:54 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:47.223 20:01:54 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.223 20:01:54 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:47.223 20:01:54 -- bdev/blockdev.sh@316 -- # nbd_pid=61987 00:08:47.223 20:01:54 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:47.223 20:01:54 -- bdev/blockdev.sh@318 -- # waitforlisten 61987 /var/tmp/spdk-nbd.sock 00:08:47.223 20:01:54 -- common/autotest_common.sh@829 -- # '[' -z 61987 ']' 00:08:47.223 20:01:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:47.223 20:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:47.224 20:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:47.224 20:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.224 20:01:54 -- common/autotest_common.sh@10 -- # set +x 00:08:47.224 20:01:54 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:47.224 [2024-12-16 20:01:54.819267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:47.224 [2024-12-16 20:01:54.819390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.483 [2024-12-16 20:01:54.966766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.740 [2024-12-16 20:01:55.129770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.729 20:01:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.729 20:01:56 -- common/autotest_common.sh@862 -- # return 0 00:08:48.729 20:01:56 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@24 -- # local i 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.729 20:01:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:48.987 20:01:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:48.987 20:01:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:48.987 20:01:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:48.987 20:01:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:48.987 20:01:56 -- common/autotest_common.sh@867 -- # local i 00:08:48.987 20:01:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.987 20:01:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.987 20:01:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:48.987 20:01:56 -- common/autotest_common.sh@871 -- # break 00:08:48.987 20:01:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.987 20:01:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.988 20:01:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.988 1+0 records in 00:08:48.988 1+0 records out 00:08:48.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480967 s, 8.5 MB/s 00:08:48.988 20:01:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.988 20:01:56 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.988 20:01:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.988 20:01:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.988 20:01:56 -- common/autotest_common.sh@887 -- # return 0 00:08:48.988 20:01:56 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:48.988 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.988 20:01:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:49.246 20:01:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:49.246 20:01:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:49.246 20:01:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:49.246 20:01:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:49.246 20:01:56 -- common/autotest_common.sh@867 -- # local i 00:08:49.246 20:01:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.246 20:01:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.246 20:01:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:49.246 20:01:56 -- common/autotest_common.sh@871 -- # break 00:08:49.246 20:01:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.246 20:01:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.246 20:01:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.246 1+0 records in 00:08:49.246 1+0 records out 00:08:49.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439482 s, 9.3 MB/s 00:08:49.246 20:01:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.246 20:01:56 -- common/autotest_common.sh@884 -- # size=4096 00:08:49.246 20:01:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.246 20:01:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.246 20:01:56 -- common/autotest_common.sh@887 -- # return 0 00:08:49.246 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.246 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.246 20:01:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:49.504 20:01:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:49.504 20:01:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:49.504 20:01:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:49.504 20:01:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:49.504 20:01:56 -- common/autotest_common.sh@867 -- # local i 00:08:49.504 20:01:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.504 20:01:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.504 20:01:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:49.504 20:01:56 -- common/autotest_common.sh@871 -- # break 00:08:49.504 20:01:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.504 20:01:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.504 20:01:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.504 1+0 records in 00:08:49.504 1+0 records out 00:08:49.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491307 s, 8.3 MB/s 00:08:49.504 20:01:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.504 20:01:56 -- common/autotest_common.sh@884 -- # size=4096 00:08:49.504 20:01:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.504 20:01:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.504 20:01:56 -- 
common/autotest_common.sh@887 -- # return 0 00:08:49.504 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.504 20:01:56 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.504 20:01:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:49.763 20:01:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:49.763 20:01:57 -- common/autotest_common.sh@867 -- # local i 00:08:49.763 20:01:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:49.763 20:01:57 -- common/autotest_common.sh@871 -- # break 00:08:49.763 20:01:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.763 1+0 records in 00:08:49.763 1+0 records out 00:08:49.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554736 s, 7.4 MB/s 00:08:49.763 20:01:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.763 20:01:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:49.763 20:01:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.763 20:01:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.763 20:01:57 -- common/autotest_common.sh@887 -- # return 0 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:49.763 20:01:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:49.763 20:01:57 -- common/autotest_common.sh@867 -- # local i 00:08:49.763 20:01:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:49.763 20:01:57 -- common/autotest_common.sh@871 -- # break 00:08:49.763 20:01:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.763 20:01:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.763 1+0 records in 00:08:49.763 1+0 records out 00:08:49.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048151 s, 8.5 MB/s 00:08:49.763 20:01:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.763 20:01:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:49.763 20:01:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.763 20:01:57 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.763 20:01:57 -- common/autotest_common.sh@887 -- # return 0 00:08:49.763 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.764 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.764 20:01:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:50.022 20:01:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:50.022 20:01:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:50.022 20:01:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:50.022 20:01:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:50.022 20:01:57 -- common/autotest_common.sh@867 -- # local i 00:08:50.022 20:01:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.022 20:01:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.022 20:01:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:50.022 20:01:57 -- common/autotest_common.sh@871 -- # break 00:08:50.022 20:01:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.022 20:01:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.022 20:01:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.022 1+0 records in 00:08:50.022 1+0 records out 00:08:50.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405178 s, 10.1 MB/s 00:08:50.022 20:01:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.022 20:01:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.022 20:01:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.022 20:01:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.022 20:01:57 -- common/autotest_common.sh@887 -- # return 0 00:08:50.022 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.022 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:50.022 20:01:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:50.281 20:01:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:50.281 20:01:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:50.281 20:01:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:50.281 20:01:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:50.281 20:01:57 -- common/autotest_common.sh@867 -- # local i 00:08:50.281 20:01:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.281 20:01:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.281 20:01:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:50.281 20:01:57 -- common/autotest_common.sh@871 -- # break 00:08:50.281 20:01:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.281 20:01:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.281 20:01:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.281 1+0 records in 00:08:50.281 1+0 records out 00:08:50.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504795 s, 8.1 MB/s 00:08:50.281 20:01:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.281 20:01:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.281 20:01:57 -- common/autotest_common.sh@885 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.281 20:01:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.281 20:01:57 -- common/autotest_common.sh@887 -- # return 0 00:08:50.281 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.281 20:01:57 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:50.281 20:01:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.539 20:01:58 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd0", 00:08:50.539 "bdev_name": "Nvme0n1p1" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd1", 00:08:50.539 "bdev_name": "Nvme0n1p2" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd2", 00:08:50.539 "bdev_name": "Nvme1n1" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd3", 00:08:50.539 "bdev_name": "Nvme2n1" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd4", 00:08:50.539 "bdev_name": "Nvme2n2" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd5", 00:08:50.539 "bdev_name": "Nvme2n3" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd6", 00:08:50.539 "bdev_name": "Nvme3n1" 00:08:50.539 } 00:08:50.539 ]' 00:08:50.539 20:01:58 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:50.539 20:01:58 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd0", 00:08:50.539 "bdev_name": "Nvme0n1p1" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd1", 00:08:50.539 "bdev_name": "Nvme0n1p2" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd2", 00:08:50.539 "bdev_name": "Nvme1n1" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd3", 00:08:50.539 "bdev_name": "Nvme2n1" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd4", 00:08:50.539 "bdev_name": "Nvme2n2" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd5", 00:08:50.539 "bdev_name": "Nvme2n3" 00:08:50.539 }, 00:08:50.539 { 00:08:50.539 "nbd_device": "/dev/nbd6", 00:08:50.540 "bdev_name": "Nvme3n1" 00:08:50.540 } 00:08:50.540 ]' 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@51 -- # local i 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.540 20:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.798 20:01:58 -- 
bdev/nbd_common.sh@41 -- # break 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@41 -- # break 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.798 20:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@41 -- # break 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.056 20:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@41 -- # break 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.314 20:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@41 -- # break 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd5 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.572 20:01:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@41 -- # break 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@41 -- # break 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.831 20:01:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@65 -- # true 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@65 -- # count=0 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@122 -- # count=0 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@127 -- # return 0 00:08:52.089 20:01:59 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@12 -- # local i 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.089 20:01:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:52.348 /dev/nbd0 00:08:52.348 20:01:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:52.348 20:01:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:52.348 20:01:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:52.348 20:01:59 -- common/autotest_common.sh@867 -- # local i 00:08:52.348 20:01:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.348 20:01:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.348 20:01:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:52.348 20:01:59 -- common/autotest_common.sh@871 -- # break 00:08:52.348 20:01:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.348 20:01:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.348 20:01:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.348 1+0 records in 00:08:52.348 1+0 records out 00:08:52.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509479 s, 8.0 MB/s 00:08:52.348 20:01:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.348 20:01:59 -- common/autotest_common.sh@884 -- # size=4096 00:08:52.348 20:01:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.348 20:01:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.348 20:01:59 -- common/autotest_common.sh@887 -- # return 0 00:08:52.348 20:01:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.348 20:01:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.348 20:01:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:52.606 /dev/nbd1 00:08:52.606 20:02:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:52.606 20:02:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:52.606 20:02:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:52.606 20:02:00 -- common/autotest_common.sh@867 -- # local i 00:08:52.606 20:02:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.606 20:02:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.606 20:02:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:52.606 20:02:00 -- common/autotest_common.sh@871 -- # break 00:08:52.606 20:02:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.606 20:02:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.606 20:02:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.606 1+0 records in 00:08:52.606 1+0 records out 00:08:52.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346717 s, 11.8 MB/s 00:08:52.606 20:02:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.606 20:02:00 -- 
common/autotest_common.sh@884 -- # size=4096 00:08:52.606 20:02:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.606 20:02:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.606 20:02:00 -- common/autotest_common.sh@887 -- # return 0 00:08:52.606 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.606 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.606 20:02:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:52.864 /dev/nbd10 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:52.864 20:02:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:52.864 20:02:00 -- common/autotest_common.sh@867 -- # local i 00:08:52.864 20:02:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:52.864 20:02:00 -- common/autotest_common.sh@871 -- # break 00:08:52.864 20:02:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.864 1+0 records in 00:08:52.864 1+0 records out 00:08:52.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449758 s, 9.1 MB/s 00:08:52.864 20:02:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.864 20:02:00 -- common/autotest_common.sh@884 -- # size=4096 00:08:52.864 20:02:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.864 20:02:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.864 20:02:00 -- common/autotest_common.sh@887 -- # return 0 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:52.864 /dev/nbd11 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:52.864 20:02:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:52.864 20:02:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:52.864 20:02:00 -- common/autotest_common.sh@867 -- # local i 00:08:52.864 20:02:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:52.864 20:02:00 -- common/autotest_common.sh@871 -- # break 00:08:52.864 20:02:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.864 20:02:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.864 1+0 records in 00:08:52.864 1+0 records out 00:08:52.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285756 s, 14.3 MB/s 00:08:52.864 20:02:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
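
Every nbd_start_disk call above is followed by the same readiness check: waitfornbd greps /proc/partitions for the device name and then issues a single 4 KiB direct-I/O read with dd before the loop moves on to the next device. A minimal sketch of that pattern, reconstructed from the xtrace output rather than copied from common/autotest_common.sh (the retry limit of 20 comes from the trace; the temporary file path here is a stand-in for the repo's test/bdev/nbdtest file):

    # Sketch only: a re-creation of the waitfornbd check seen repeatedly above.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # The device exists once the kernel lists it in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the export actually serves I/O: one direct 4 KiB read must
        # succeed and leave a non-empty file behind.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
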
00:08:52.864 20:02:00 -- common/autotest_common.sh@884 -- # size=4096 00:08:52.864 20:02:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.864 20:02:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.864 20:02:00 -- common/autotest_common.sh@887 -- # return 0 00:08:52.865 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.865 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.865 20:02:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:53.123 /dev/nbd12 00:08:53.123 20:02:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:53.123 20:02:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:53.124 20:02:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:53.124 20:02:00 -- common/autotest_common.sh@867 -- # local i 00:08:53.124 20:02:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:53.124 20:02:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:53.124 20:02:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:53.124 20:02:00 -- common/autotest_common.sh@871 -- # break 00:08:53.124 20:02:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:53.124 20:02:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:53.124 20:02:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.124 1+0 records in 00:08:53.124 1+0 records out 00:08:53.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405724 s, 10.1 MB/s 00:08:53.124 20:02:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.124 20:02:00 -- common/autotest_common.sh@884 -- # size=4096 00:08:53.124 20:02:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.124 20:02:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:53.124 20:02:00 -- common/autotest_common.sh@887 -- # return 0 00:08:53.124 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.124 20:02:00 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.124 20:02:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:53.384 /dev/nbd13 00:08:53.384 20:02:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:53.384 20:02:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:53.384 20:02:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:53.384 20:02:01 -- common/autotest_common.sh@867 -- # local i 00:08:53.384 20:02:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:53.384 20:02:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:53.384 20:02:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:53.384 20:02:01 -- common/autotest_common.sh@871 -- # break 00:08:53.384 20:02:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:53.384 20:02:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:53.384 20:02:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.384 1+0 records in 00:08:53.384 1+0 records out 00:08:53.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637827 s, 6.4 MB/s 00:08:53.384 20:02:01 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.384 20:02:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:53.384 20:02:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.384 20:02:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:53.384 20:02:01 -- common/autotest_common.sh@887 -- # return 0 00:08:53.384 20:02:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.384 20:02:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.384 20:02:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:53.646 /dev/nbd14 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:53.646 20:02:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:53.646 20:02:01 -- common/autotest_common.sh@867 -- # local i 00:08:53.646 20:02:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:53.646 20:02:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:53.646 20:02:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:53.646 20:02:01 -- common/autotest_common.sh@871 -- # break 00:08:53.646 20:02:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:53.646 20:02:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:53.646 20:02:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.646 1+0 records in 00:08:53.646 1+0 records out 00:08:53.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417411 s, 9.8 MB/s 00:08:53.646 20:02:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.646 20:02:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:53.646 20:02:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.646 20:02:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:53.646 20:02:01 -- common/autotest_common.sh@887 -- # return 0 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.646 20:02:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd0", 00:08:53.908 "bdev_name": "Nvme0n1p1" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd1", 00:08:53.908 "bdev_name": "Nvme0n1p2" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd10", 00:08:53.908 "bdev_name": "Nvme1n1" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd11", 00:08:53.908 "bdev_name": "Nvme2n1" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd12", 00:08:53.908 "bdev_name": "Nvme2n2" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd13", 00:08:53.908 "bdev_name": "Nvme2n3" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd14", 00:08:53.908 "bdev_name": "Nvme3n1" 00:08:53.908 } 00:08:53.908 ]' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:53.908 { 00:08:53.908 "nbd_device": 
"/dev/nbd0", 00:08:53.908 "bdev_name": "Nvme0n1p1" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd1", 00:08:53.908 "bdev_name": "Nvme0n1p2" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd10", 00:08:53.908 "bdev_name": "Nvme1n1" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd11", 00:08:53.908 "bdev_name": "Nvme2n1" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd12", 00:08:53.908 "bdev_name": "Nvme2n2" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd13", 00:08:53.908 "bdev_name": "Nvme2n3" 00:08:53.908 }, 00:08:53.908 { 00:08:53.908 "nbd_device": "/dev/nbd14", 00:08:53.908 "bdev_name": "Nvme3n1" 00:08:53.908 } 00:08:53.908 ]' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:53.908 /dev/nbd1 00:08:53.908 /dev/nbd10 00:08:53.908 /dev/nbd11 00:08:53.908 /dev/nbd12 00:08:53.908 /dev/nbd13 00:08:53.908 /dev/nbd14' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:53.908 /dev/nbd1 00:08:53.908 /dev/nbd10 00:08:53.908 /dev/nbd11 00:08:53.908 /dev/nbd12 00:08:53.908 /dev/nbd13 00:08:53.908 /dev/nbd14' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@65 -- # count=7 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@95 -- # count=7 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:53.908 256+0 records in 00:08:53.908 256+0 records out 00:08:53.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640795 s, 164 MB/s 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.908 20:02:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:54.168 256+0 records in 00:08:54.169 256+0 records out 00:08:54.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105478 s, 9.9 MB/s 00:08:54.169 20:02:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.169 20:02:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:54.169 256+0 records in 00:08:54.169 256+0 records out 00:08:54.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0832924 s, 12.6 MB/s 00:08:54.169 20:02:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.169 20:02:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:54.169 256+0 records in 00:08:54.169 256+0 records out 00:08:54.169 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0814893 s, 12.9 MB/s 00:08:54.169 20:02:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.169 20:02:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:54.429 256+0 records in 00:08:54.429 256+0 records out 00:08:54.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0791687 s, 13.2 MB/s 00:08:54.429 20:02:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.429 20:02:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:54.429 256+0 records in 00:08:54.429 256+0 records out 00:08:54.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0799178 s, 13.1 MB/s 00:08:54.429 20:02:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.429 20:02:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:54.429 256+0 records in 00:08:54.429 256+0 records out 00:08:54.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114234 s, 9.2 MB/s 00:08:54.429 20:02:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.429 20:02:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:54.690 256+0 records in 00:08:54.690 256+0 records out 00:08:54.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0801475 s, 13.1 MB/s 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@51 -- # local i 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.690 20:02:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@41 -- # break 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@41 -- # break 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.951 20:02:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@41 -- # break 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.212 20:02:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:55.470 20:02:02 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@41 -- # break 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.470 20:02:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@41 -- # break 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@41 -- # break 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.728 20:02:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@41 -- # break 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.987 20:02:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@65 -- # true 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@65 -- # count=0 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@66 -- # echo 0 
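
After every export has been torn down, nbd_get_count asks the RPC server which disks are still attached and the test insists the answer is zero. Condensed from the trace above into a standalone sketch (the `|| true` matters: grep -c exits non-zero when it counts nothing, which is exactly the case the test wants to see):

    # Sketch of the nbd_get_count check, using the rpc.py and socket path from this run.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    active=$("$rpc_py" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$active" -ne 0 ]; then
        echo "nbd devices still attached: $active" >&2
        exit 1
    fi
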
00:08:56.245 20:02:03 -- bdev/nbd_common.sh@104 -- # count=0 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@109 -- # return 0 00:08:56.245 20:02:03 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:56.245 20:02:03 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:56.503 malloc_lvol_verify 00:08:56.503 20:02:03 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:56.761 335cdded-69e4-4f6e-8744-3e553035a298 00:08:56.762 20:02:04 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:56.762 f0abcd07-5ed4-4346-8856-4c2e75c89961 00:08:56.762 20:02:04 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:57.020 /dev/nbd0 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:57.020 mke2fs 1.47.0 (5-Feb-2023) 00:08:57.020 Discarding device blocks: 0/4096 done 00:08:57.020 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:57.020 00:08:57.020 Allocating group tables: 0/1 done 00:08:57.020 Writing inode tables: 0/1 done 00:08:57.020 Creating journal (1024 blocks): done 00:08:57.020 Writing superblocks and filesystem accounting information: 0/1 done 00:08:57.020 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@51 -- # local i 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.020 20:02:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@41 -- # break 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:57.279 20:02:04 -- bdev/nbd_common.sh@147 -- # return 0 00:08:57.279 20:02:04 -- bdev/blockdev.sh@324 -- # killprocess 61987 00:08:57.279 20:02:04 -- common/autotest_common.sh@936 -- # '[' -z 61987 ']' 
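
The last nbd exercise before teardown stacks a logical volume on top of a malloc bdev and checks that a filesystem can be created over the export; the mke2fs output above is that check passing. Replayed from the RPC calls in the trace as a sketch (bdev, lvstore, and lvol names are the ones used above; size arguments are copied verbatim from the trace):

    # Sketch of the lvol-over-NBD verification, not the test script itself.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # small malloc bdev, 512 B blocks
    "$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # lvol "lvol" on lvstore "lvs"
    "$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0 && mkfs_ret=0 || mkfs_ret=1
    "$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0
    [ "$mkfs_ret" -eq 0 ]
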
00:08:57.279 20:02:04 -- common/autotest_common.sh@940 -- # kill -0 61987 00:08:57.279 20:02:04 -- common/autotest_common.sh@941 -- # uname 00:08:57.279 20:02:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:57.279 20:02:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61987 00:08:57.279 20:02:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:57.279 20:02:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:57.279 killing process with pid 61987 00:08:57.279 20:02:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61987' 00:08:57.279 20:02:04 -- common/autotest_common.sh@955 -- # kill 61987 00:08:57.279 20:02:04 -- common/autotest_common.sh@960 -- # wait 61987 00:08:58.214 20:02:05 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:58.214 00:08:58.214 real 0m10.744s 00:08:58.214 user 0m15.032s 00:08:58.214 sys 0m3.429s 00:08:58.214 20:02:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.214 ************************************ 00:08:58.214 END TEST bdev_nbd 00:08:58.214 ************************************ 00:08:58.214 20:02:05 -- common/autotest_common.sh@10 -- # set +x 00:08:58.214 20:02:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:58.214 20:02:05 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:58.214 20:02:05 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:58.214 skipping fio tests on NVMe due to multi-ns failures. 00:08:58.214 20:02:05 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:58.214 20:02:05 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:58.214 20:02:05 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:58.214 20:02:05 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:58.214 20:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.214 20:02:05 -- common/autotest_common.sh@10 -- # set +x 00:08:58.214 ************************************ 00:08:58.214 START TEST bdev_verify 00:08:58.214 ************************************ 00:08:58.214 20:02:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:58.214 [2024-12-16 20:02:05.598096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:58.214 [2024-12-16 20:02:05.598223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62404 ] 00:08:58.214 [2024-12-16 20:02:05.745347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.473 [2024-12-16 20:02:05.906699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.473 [2024-12-16 20:02:05.906771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.041 Running I/O for 5 seconds... 
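
bdev_verify, whose results follow, is a single bdevperf run against the JSON config describing the same seven bdevs that were just exercised over NBD. The command recorded in the trace, spaced out for readability (trailing empty argument omitted; the flag glosses in the comments are my paraphrase, not output from this run):

    # The bdev_verify invocation from the trace above.
    #   --json bdev.json : bdev configuration to attach (the NVMe bdevs listed below)
    #   -q 128           : queue depth per job
    #   -o 4096          : I/O size in bytes
    #   -w verify        : read/write workload with data verification
    #   -t 5             : run for 5 seconds
    #   -C               : every core in the mask drives every bdev, which is why
    #                      each bdev appears twice below (Core Mask 0x1 and 0x2 jobs)
    #   -m 0x3           : core mask, cores 0 and 1 (the two reactors started above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
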
00:09:04.327 00:09:04.327 Latency(us) 00:09:04.327 [2024-12-16T20:02:11.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0x5e800 00:09:04.327 Nvme0n1p1 : 5.04 2725.87 10.65 0.00 0.00 46809.52 10435.35 51017.26 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x5e800 length 0x5e800 00:09:04.327 Nvme0n1p1 : 5.04 2812.73 10.99 0.00 0.00 45382.71 6301.54 54848.59 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0x5e7ff 00:09:04.327 Nvme0n1p2 : 5.05 2728.83 10.66 0.00 0.00 46756.68 3982.57 49202.41 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:04.327 Nvme0n1p2 : 5.04 2818.20 11.01 0.00 0.00 45254.14 3402.83 49807.36 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0xa0000 00:09:04.327 Nvme1n1 : 5.05 2727.13 10.65 0.00 0.00 46731.62 6503.19 47185.92 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0xa0000 length 0xa0000 00:09:04.327 Nvme1n1 : 5.04 2817.05 11.00 0.00 0.00 45216.95 4990.82 45169.43 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0x80000 00:09:04.327 Nvme2n1 : 5.05 2725.43 10.65 0.00 0.00 46660.58 8872.57 48194.17 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x80000 length 0x80000 00:09:04.327 Nvme2n1 : 5.04 2815.90 11.00 0.00 0.00 45185.83 6452.78 44362.83 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0x80000 00:09:04.327 Nvme2n2 : 5.06 2723.95 10.64 0.00 0.00 46645.01 10737.82 49202.41 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x80000 length 0x80000 00:09:04.327 Nvme2n2 : 5.05 2822.43 11.03 0.00 0.00 45058.95 1903.06 41338.09 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0x80000 00:09:04.327 Nvme2n3 : 5.06 2723.34 10.64 0.00 0.00 46610.15 11141.12 49404.06 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x80000 length 0x80000 00:09:04.327 Nvme2n3 : 5.05 2821.36 11.02 0.00 0.00 45019.91 3087.75 41136.44 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x0 length 0x20000 00:09:04.327 Nvme3n1 : 5.06 2730.72 10.67 0.00 0.00 46486.56 2457.60 49605.71 00:09:04.327 [2024-12-16T20:02:11.967Z] Job: Nvme3n1 (Core 
Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:04.327 Verification LBA range: start 0x20000 length 0x20000 00:09:04.327 Nvme3n1 : 5.05 2819.37 11.01 0.00 0.00 44989.39 5822.62 40934.79 00:09:04.327 [2024-12-16T20:02:11.967Z] =================================================================================================================== 00:09:04.327 [2024-12-16T20:02:11.967Z] Total : 38812.31 151.61 0.00 0.00 45902.80 1903.06 54848.59 00:09:07.626 00:09:07.626 real 0m9.366s 00:09:07.626 user 0m17.047s 00:09:07.626 sys 0m0.261s 00:09:07.626 20:02:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.626 20:02:14 -- common/autotest_common.sh@10 -- # set +x 00:09:07.626 ************************************ 00:09:07.626 END TEST bdev_verify 00:09:07.626 ************************************ 00:09:07.626 20:02:14 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:07.626 20:02:14 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:07.626 20:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.626 20:02:14 -- common/autotest_common.sh@10 -- # set +x 00:09:07.626 ************************************ 00:09:07.626 START TEST bdev_verify_big_io 00:09:07.626 ************************************ 00:09:07.626 20:02:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:07.626 [2024-12-16 20:02:15.030725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:07.627 [2024-12-16 20:02:15.030822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62524 ] 00:09:07.627 [2024-12-16 20:02:15.176090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.886 [2024-12-16 20:02:15.402413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.886 [2024-12-16 20:02:15.402458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.458 Running I/O for 5 seconds... 
00:09:15.029 00:09:15.029 Latency(us) 00:09:15.029 [2024-12-16T20:02:22.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x0 length 0x5e80 00:09:15.029 Nvme0n1p1 : 5.35 234.06 14.63 0.00 0.00 529372.52 112923.57 816276.09 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x5e80 length 0x5e80 00:09:15.029 Nvme0n1p1 : 5.35 252.06 15.75 0.00 0.00 496222.58 78239.90 748521.94 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x0 length 0x5e7f 00:09:15.029 Nvme0n1p2 : 5.40 240.02 15.00 0.00 0.00 513547.89 47992.52 748521.94 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:15.029 Nvme0n1p2 : 5.40 257.48 16.09 0.00 0.00 481761.18 47387.57 693673.35 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x0 length 0xa000 00:09:15.029 Nvme1n1 : 5.43 248.26 15.52 0.00 0.00 493993.01 28230.89 690446.97 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0xa000 length 0xa000 00:09:15.029 Nvme1n1 : 5.40 257.38 16.09 0.00 0.00 475302.89 48395.82 642051.15 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x0 length 0x8000 00:09:15.029 Nvme2n1 : 5.44 248.18 15.51 0.00 0.00 486850.23 28835.84 629145.60 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x8000 length 0x8000 00:09:15.029 Nvme2n1 : 5.43 263.30 16.46 0.00 0.00 460087.47 24601.21 587202.56 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x0 length 0x8000 00:09:15.029 Nvme2n2 : 5.45 255.32 15.96 0.00 0.00 467870.72 10838.65 709805.29 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x8000 length 0x8000 00:09:15.029 Nvme2n2 : 5.43 263.24 16.45 0.00 0.00 453782.26 24500.38 571070.62 00:09:15.029 [2024-12-16T20:02:22.669Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.029 Verification LBA range: start 0x0 length 0x8000 00:09:15.029 Nvme2n3 : 5.46 262.02 16.38 0.00 0.00 449702.50 7662.67 645277.54 00:09:15.029 [2024-12-16T20:02:22.670Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.030 Verification LBA range: start 0x8000 length 0x8000 00:09:15.030 Nvme2n3 : 5.44 271.50 16.97 0.00 0.00 435625.12 8620.50 474278.99 00:09:15.030 [2024-12-16T20:02:22.670Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.030 Verification LBA range: start 0x0 length 0x2000 00:09:15.030 Nvme3n1 : 5.48 286.19 17.89 0.00 0.00 406159.38 1380.04 877577.45 00:09:15.030 [2024-12-16T20:02:22.670Z] Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.030 Verification LBA range: start 0x2000 length 0x2000 00:09:15.030 Nvme3n1 : 5.45 287.42 17.96 0.00 0.00 406594.36 3428.04 580749.78 00:09:15.030 [2024-12-16T20:02:22.670Z] =================================================================================================================== 00:09:15.030 [2024-12-16T20:02:22.670Z] Total : 3626.44 226.65 0.00 0.00 466214.35 1380.04 877577.45 00:09:15.595 00:09:15.595 real 0m8.157s 00:09:15.595 user 0m15.115s 00:09:15.595 sys 0m0.234s 00:09:15.595 20:02:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:15.595 20:02:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.595 ************************************ 00:09:15.595 END TEST bdev_verify_big_io 00:09:15.595 ************************************ 00:09:15.595 20:02:23 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.595 20:02:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:15.595 20:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.595 20:02:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.595 ************************************ 00:09:15.595 START TEST bdev_write_zeroes 00:09:15.595 ************************************ 00:09:15.595 20:02:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.853 [2024-12-16 20:02:23.238813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.853 [2024-12-16 20:02:23.238922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62633 ] 00:09:15.853 [2024-12-16 20:02:23.381624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.111 [2024-12-16 20:02:23.519501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.369 Running I/O for 1 seconds... 
00:09:17.744 00:09:17.744 Latency(us) 00:09:17.744 [2024-12-16T20:02:25.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme0n1p1 : 1.02 9240.13 36.09 0.00 0.00 13818.85 6049.48 24702.03 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme0n1p2 : 1.02 9226.47 36.04 0.00 0.00 13819.61 6276.33 25710.28 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme1n1 : 1.02 9216.14 36.00 0.00 0.00 13808.14 9326.28 21576.47 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme2n1 : 1.02 9205.84 35.96 0.00 0.00 13791.69 9275.86 21475.64 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme2n2 : 1.02 9195.51 35.92 0.00 0.00 13756.18 7713.08 21778.12 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme2n3 : 1.02 9185.22 35.88 0.00 0.00 13753.82 8318.03 21072.34 00:09:17.744 [2024-12-16T20:02:25.384Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:17.744 Nvme3n1 : 1.03 9112.56 35.60 0.00 0.00 13846.75 9527.93 21072.34 00:09:17.744 [2024-12-16T20:02:25.384Z] =================================================================================================================== 00:09:17.744 [2024-12-16T20:02:25.384Z] Total : 64381.87 251.49 0.00 0.00 13799.25 6049.48 25710.28 00:09:18.316 00:09:18.316 real 0m2.769s 00:09:18.316 user 0m2.481s 00:09:18.316 sys 0m0.171s 00:09:18.316 20:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.316 ************************************ 00:09:18.316 END TEST bdev_write_zeroes 00:09:18.316 ************************************ 00:09:18.316 20:02:25 -- common/autotest_common.sh@10 -- # set +x 00:09:18.576 20:02:26 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:18.576 20:02:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:18.577 20:02:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.577 20:02:26 -- common/autotest_common.sh@10 -- # set +x 00:09:18.577 ************************************ 00:09:18.577 START TEST bdev_json_nonenclosed 00:09:18.577 ************************************ 00:09:18.577 20:02:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:18.577 [2024-12-16 20:02:26.084589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:18.577 [2024-12-16 20:02:26.084732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62686 ] 00:09:18.837 [2024-12-16 20:02:26.229273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.838 [2024-12-16 20:02:26.454806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.838 [2024-12-16 20:02:26.455000] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:18.838 [2024-12-16 20:02:26.455019] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.417 00:09:19.417 real 0m0.748s 00:09:19.417 user 0m0.527s 00:09:19.417 sys 0m0.115s 00:09:19.417 20:02:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.417 20:02:26 -- common/autotest_common.sh@10 -- # set +x 00:09:19.417 ************************************ 00:09:19.417 END TEST bdev_json_nonenclosed 00:09:19.417 ************************************ 00:09:19.417 20:02:26 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:19.417 20:02:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:19.417 20:02:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.417 20:02:26 -- common/autotest_common.sh@10 -- # set +x 00:09:19.417 ************************************ 00:09:19.417 START TEST bdev_json_nonarray 00:09:19.417 ************************************ 00:09:19.417 20:02:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:19.417 [2024-12-16 20:02:26.891015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:19.417 [2024-12-16 20:02:26.891159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62712 ] 00:09:19.417 [2024-12-16 20:02:27.047169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.707 [2024-12-16 20:02:27.269614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.707 [2024-12-16 20:02:27.269825] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
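Both JSON negative tests feed bdevperf a deliberately malformed configuration and only assert that the app stops with a non-zero exit code. The exact contents of nonenclosed.json and nonarray.json are not reproduced in the log; the sketch below shows the shapes implied by the two error messages, with the nonarray case run as an example.

# Sketch: shapes implied by the errors above (fixture contents are assumed,
# not copied from the repo).
#   valid:        { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
#   nonenclosed:  "subsystems": [ ... ]            <- not enclosed in {}
#   nonarray:     { "subsystems": { ... } }        <- object, not an array
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
       --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "expected bdevperf to reject the config" >&2
    exit 1
fi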
00:09:19.707 [2024-12-16 20:02:27.269845] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.966 00:09:19.966 real 0m0.737s 00:09:19.966 user 0m0.508s 00:09:19.966 sys 0m0.123s 00:09:19.966 20:02:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.966 ************************************ 00:09:19.966 END TEST bdev_json_nonarray 00:09:19.966 ************************************ 00:09:19.966 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:19.966 20:02:27 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:19.966 20:02:27 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:19.966 20:02:27 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:19.966 20:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.224 20:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.224 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.224 ************************************ 00:09:20.224 START TEST bdev_gpt_uuid 00:09:20.224 ************************************ 00:09:20.224 20:02:27 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:20.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.224 20:02:27 -- bdev/blockdev.sh@612 -- # local bdev 00:09:20.224 20:02:27 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:20.224 20:02:27 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62743 00:09:20.224 20:02:27 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:20.224 20:02:27 -- bdev/blockdev.sh@47 -- # waitforlisten 62743 00:09:20.224 20:02:27 -- common/autotest_common.sh@829 -- # '[' -z 62743 ']' 00:09:20.224 20:02:27 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:20.224 20:02:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.224 20:02:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.224 20:02:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.224 20:02:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.224 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.224 [2024-12-16 20:02:27.686461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.224 [2024-12-16 20:02:27.686574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62743 ] 00:09:20.224 [2024-12-16 20:02:27.830277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.483 [2024-12-16 20:02:28.006146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.483 [2024-12-16 20:02:28.006371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.857 20:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.857 20:02:29 -- common/autotest_common.sh@862 -- # return 0 00:09:21.857 20:02:29 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:21.857 20:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.857 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:21.857 Some configs were skipped because the RPC state that can call them passed over. 
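The gpt_uuid test drives a bare spdk_tgt entirely over its RPC socket: the bdev configuration is pushed with load_config and the test then waits for bdev examination to finish before querying the partitions. A rough equivalent outside the harness, using rpc.py directly instead of the rpc_cmd wrapper (default socket /var/tmp/spdk.sock assumed):

# Sketch: same sequence without the test harness (rpc.py instead of rpc_cmd;
# default RPC socket assumed).
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt &
tgt_pid=$!
$SPDK/scripts/rpc.py -t 30 rpc_get_methods > /dev/null   # wait until the socket answers
$SPDK/scripts/rpc.py load_config < $SPDK/test/bdev/bdev.json
$SPDK/scripts/rpc.py bdev_wait_for_examine
# ... issue bdev_get_bdevs queries here, then: kill "$tgt_pid"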
00:09:21.857 20:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.857 20:02:29 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:21.857 20:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.857 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:22.116 20:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:22.116 20:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.116 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:22.116 20:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:22.116 { 00:09:22.116 "name": "Nvme0n1p1", 00:09:22.116 "aliases": [ 00:09:22.116 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:22.116 ], 00:09:22.116 "product_name": "GPT Disk", 00:09:22.116 "block_size": 4096, 00:09:22.116 "num_blocks": 774144, 00:09:22.116 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:22.116 "md_size": 64, 00:09:22.116 "md_interleave": false, 00:09:22.116 "dif_type": 0, 00:09:22.116 "assigned_rate_limits": { 00:09:22.116 "rw_ios_per_sec": 0, 00:09:22.116 "rw_mbytes_per_sec": 0, 00:09:22.116 "r_mbytes_per_sec": 0, 00:09:22.116 "w_mbytes_per_sec": 0 00:09:22.116 }, 00:09:22.116 "claimed": false, 00:09:22.116 "zoned": false, 00:09:22.116 "supported_io_types": { 00:09:22.116 "read": true, 00:09:22.116 "write": true, 00:09:22.116 "unmap": true, 00:09:22.116 "write_zeroes": true, 00:09:22.116 "flush": true, 00:09:22.116 "reset": true, 00:09:22.116 "compare": true, 00:09:22.116 "compare_and_write": false, 00:09:22.116 "abort": true, 00:09:22.116 "nvme_admin": false, 00:09:22.116 "nvme_io": false 00:09:22.116 }, 00:09:22.116 "driver_specific": { 00:09:22.116 "gpt": { 00:09:22.116 "base_bdev": "Nvme0n1", 00:09:22.116 "offset_blocks": 256, 00:09:22.116 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:22.116 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:22.116 "partition_name": "SPDK_TEST_first" 00:09:22.116 } 00:09:22.116 } 00:09:22.116 } 00:09:22.116 ]' 00:09:22.116 20:02:29 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:22.116 20:02:29 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:22.116 20:02:29 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:22.116 20:02:29 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:22.116 20:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.116 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:22.116 20:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:22.116 { 00:09:22.116 "name": "Nvme0n1p2", 00:09:22.116 "aliases": [ 00:09:22.116 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:22.116 ], 00:09:22.116 "product_name": "GPT Disk", 00:09:22.116 "block_size": 4096, 00:09:22.116 "num_blocks": 774143, 00:09:22.116 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:22.116 "md_size": 64, 00:09:22.116 "md_interleave": false, 00:09:22.116 "dif_type": 0, 00:09:22.116 "assigned_rate_limits": { 00:09:22.116 "rw_ios_per_sec": 0, 00:09:22.116 "rw_mbytes_per_sec": 0, 00:09:22.116 "r_mbytes_per_sec": 0, 00:09:22.116 "w_mbytes_per_sec": 0 00:09:22.116 }, 00:09:22.116 "claimed": false, 00:09:22.116 "zoned": false, 00:09:22.116 "supported_io_types": { 00:09:22.116 "read": true, 00:09:22.116 "write": true, 00:09:22.116 "unmap": true, 00:09:22.116 "write_zeroes": true, 00:09:22.116 "flush": true, 00:09:22.116 "reset": true, 00:09:22.116 "compare": true, 00:09:22.116 "compare_and_write": false, 00:09:22.116 "abort": true, 00:09:22.116 "nvme_admin": false, 00:09:22.116 "nvme_io": false 00:09:22.116 }, 00:09:22.116 "driver_specific": { 00:09:22.116 "gpt": { 00:09:22.116 "base_bdev": "Nvme0n1", 00:09:22.116 "offset_blocks": 774400, 00:09:22.116 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:22.116 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:22.116 "partition_name": "SPDK_TEST_second" 00:09:22.116 } 00:09:22.116 } 00:09:22.116 } 00:09:22.116 ]' 00:09:22.116 20:02:29 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:22.116 20:02:29 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:22.116 20:02:29 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:22.116 20:02:29 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:22.116 20:02:29 -- bdev/blockdev.sh@629 -- # killprocess 62743 00:09:22.116 20:02:29 -- common/autotest_common.sh@936 -- # '[' -z 62743 ']' 00:09:22.116 20:02:29 -- common/autotest_common.sh@940 -- # kill -0 62743 00:09:22.116 20:02:29 -- common/autotest_common.sh@941 -- # uname 00:09:22.116 20:02:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:22.116 20:02:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62743 00:09:22.116 killing process with pid 62743 00:09:22.116 20:02:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:22.116 20:02:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:22.116 20:02:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62743' 00:09:22.116 20:02:29 -- common/autotest_common.sh@955 -- # kill 62743 00:09:22.116 20:02:29 -- common/autotest_common.sh@960 -- # wait 62743 00:09:24.017 ************************************ 00:09:24.017 END TEST bdev_gpt_uuid 00:09:24.017 ************************************ 00:09:24.017 00:09:24.017 real 0m3.580s 00:09:24.017 user 0m3.841s 00:09:24.017 sys 0m0.391s 00:09:24.017 20:02:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.017 20:02:31 -- common/autotest_common.sh@10 -- # set +x 00:09:24.017 20:02:31 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:24.017 20:02:31 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:24.017 20:02:31 -- bdev/blockdev.sh@809 -- # cleanup 00:09:24.017 20:02:31 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:24.017 20:02:31 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:24.017 20:02:31 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
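The checks the test just performed boil down to: look the partition bdev up by its partition GUID, then confirm that exactly one bdev comes back and that both the alias and the GPT unique_partition_guid echo that GUID. The same assertions, runnable against any live spdk_tgt exposing a GPT-labelled bdev (the GUID below is the SPDK_TEST_first value from this run, not a universal constant):

# Sketch: the GUID assertions above, outside the harness.
SPDK=/home/vagrant/spdk_repo/spdk
part_uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first partition GUID from this run
bdev_json=$($SPDK/scripts/rpc.py bdev_get_bdevs -b "$part_uuid")
[[ $(jq -r 'length' <<< "$bdev_json") == 1 ]]
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$part_uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$part_uuid" ]]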
00:09:24.017 20:02:31 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:24.017 20:02:31 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:24.017 20:02:31 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.275 Waiting for block devices as requested 00:09:24.275 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.275 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.275 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.534 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.800 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:29.800 20:02:37 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:29.800 20:02:37 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:29.800 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:29.800 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:29.800 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:29.800 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:29.800 20:02:37 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:29.800 00:09:29.800 real 0m59.206s 00:09:29.800 user 1m16.042s 00:09:29.800 sys 0m7.900s 00:09:29.800 20:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:29.800 ************************************ 00:09:29.800 END TEST blockdev_nvme_gpt 00:09:29.800 ************************************ 00:09:29.800 20:02:37 -- common/autotest_common.sh@10 -- # set +x 00:09:29.800 20:02:37 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:29.800 20:02:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:29.800 20:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:29.800 20:02:37 -- common/autotest_common.sh@10 -- # set +x 00:09:29.800 ************************************ 00:09:29.800 START TEST nvme 00:09:29.800 ************************************ 00:09:29.800 20:02:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:30.062 * Looking for test storage... 
00:09:30.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.062 20:02:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:30.062 20:02:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:30.062 20:02:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:30.062 20:02:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:30.062 20:02:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:30.062 20:02:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:30.062 20:02:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:30.062 20:02:37 -- scripts/common.sh@335 -- # IFS=.-: 00:09:30.062 20:02:37 -- scripts/common.sh@335 -- # read -ra ver1 00:09:30.062 20:02:37 -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.062 20:02:37 -- scripts/common.sh@336 -- # read -ra ver2 00:09:30.062 20:02:37 -- scripts/common.sh@337 -- # local 'op=<' 00:09:30.062 20:02:37 -- scripts/common.sh@339 -- # ver1_l=2 00:09:30.062 20:02:37 -- scripts/common.sh@340 -- # ver2_l=1 00:09:30.062 20:02:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:30.062 20:02:37 -- scripts/common.sh@343 -- # case "$op" in 00:09:30.062 20:02:37 -- scripts/common.sh@344 -- # : 1 00:09:30.062 20:02:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:30.062 20:02:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.062 20:02:37 -- scripts/common.sh@364 -- # decimal 1 00:09:30.062 20:02:37 -- scripts/common.sh@352 -- # local d=1 00:09:30.062 20:02:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.062 20:02:37 -- scripts/common.sh@354 -- # echo 1 00:09:30.062 20:02:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:30.062 20:02:37 -- scripts/common.sh@365 -- # decimal 2 00:09:30.062 20:02:37 -- scripts/common.sh@352 -- # local d=2 00:09:30.062 20:02:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.062 20:02:37 -- scripts/common.sh@354 -- # echo 2 00:09:30.062 20:02:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:30.062 20:02:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:30.062 20:02:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:30.062 20:02:37 -- scripts/common.sh@367 -- # return 0 00:09:30.062 20:02:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.062 20:02:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:30.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.062 --rc genhtml_branch_coverage=1 00:09:30.062 --rc genhtml_function_coverage=1 00:09:30.062 --rc genhtml_legend=1 00:09:30.062 --rc geninfo_all_blocks=1 00:09:30.062 --rc geninfo_unexecuted_blocks=1 00:09:30.062 00:09:30.062 ' 00:09:30.062 20:02:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:30.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.062 --rc genhtml_branch_coverage=1 00:09:30.062 --rc genhtml_function_coverage=1 00:09:30.062 --rc genhtml_legend=1 00:09:30.062 --rc geninfo_all_blocks=1 00:09:30.062 --rc geninfo_unexecuted_blocks=1 00:09:30.062 00:09:30.062 ' 00:09:30.062 20:02:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:30.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.062 --rc genhtml_branch_coverage=1 00:09:30.062 --rc genhtml_function_coverage=1 00:09:30.062 --rc genhtml_legend=1 00:09:30.062 --rc geninfo_all_blocks=1 00:09:30.062 --rc geninfo_unexecuted_blocks=1 00:09:30.062 00:09:30.062 ' 00:09:30.062 20:02:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:30.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.062 --rc genhtml_branch_coverage=1 00:09:30.062 --rc genhtml_function_coverage=1 00:09:30.062 --rc genhtml_legend=1 00:09:30.062 --rc geninfo_all_blocks=1 00:09:30.062 --rc geninfo_unexecuted_blocks=1 00:09:30.062 00:09:30.062 ' 00:09:30.062 20:02:37 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:31.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.007 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.007 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.007 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.267 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.267 20:02:38 -- nvme/nvme.sh@79 -- # uname 00:09:31.267 20:02:38 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:31.267 20:02:38 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:31.267 20:02:38 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:31.267 20:02:38 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:31.267 20:02:38 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:31.267 20:02:38 -- common/autotest_common.sh@1055 -- # echo 0 00:09:31.267 Waiting for stub to ready for secondary processes... 00:09:31.267 20:02:38 -- common/autotest_common.sh@1057 -- # stubpid=63418 00:09:31.267 20:02:38 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:31.267 20:02:38 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:31.267 20:02:38 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:31.267 20:02:38 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63418 ]] 00:09:31.267 20:02:38 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:31.267 [2024-12-16 20:02:38.784505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:31.267 [2024-12-16 20:02:38.784631] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.202 [2024-12-16 20:02:39.546701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.202 [2024-12-16 20:02:39.720032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.202 [2024-12-16 20:02:39.720265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.202 [2024-12-16 20:02:39.720274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.202 [2024-12-16 20:02:39.738182] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:32.202 [2024-12-16 20:02:39.751580] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:32.202 [2024-12-16 20:02:39.751924] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:32.202 20:02:39 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:32.202 20:02:39 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63418 ]] 00:09:32.202 20:02:39 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:32.202 [2024-12-16 20:02:39.767423] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:32.202 [2024-12-16 20:02:39.767568] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:32.202 [2024-12-16 20:02:39.767652] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:32.202 [2024-12-16 20:02:39.774448] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:32.202 [2024-12-16 20:02:39.774595] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:32.202 [2024-12-16 20:02:39.774684] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:32.202 [2024-12-16 20:02:39.782238] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:32.202 [2024-12-16 20:02:39.782388] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:32.202 [2024-12-16 20:02:39.782485] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:32.202 [2024-12-16 20:02:39.782557] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:32.202 [2024-12-16 20:02:39.782657] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:33.136 done. 00:09:33.136 20:02:40 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:33.136 20:02:40 -- common/autotest_common.sh@1064 -- # echo done. 
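The harness considers the stub ready only once /var/run/spdk_stub0 exists, polling once per second and giving up if the stub process disappears first; those are the loop iterations interleaved above. A condensed sketch of the same wait (the PID is the one from this run; normally it is captured as $! when the stub is launched):

# Sketch of the readiness wait shown above.
stubpid=63418   # from this run; normally $! of the stub launch
while [ ! -e /var/run/spdk_stub0 ]; do
    [[ -e /proc/$stubpid ]] || { echo "stub exited before becoming ready" >&2; exit 1; }
    sleep 1s
done
echo done.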
00:09:33.136 20:02:40 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:33.136 20:02:40 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:33.136 20:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.136 20:02:40 -- common/autotest_common.sh@10 -- # set +x 00:09:33.136 ************************************ 00:09:33.136 START TEST nvme_reset 00:09:33.136 ************************************ 00:09:33.136 20:02:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:33.395 Initializing NVMe Controllers 00:09:33.395 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:33.395 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:33.395 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:33.395 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:33.395 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:33.395 00:09:33.395 real 0m0.196s 00:09:33.395 user 0m0.062s 00:09:33.395 sys 0m0.090s 00:09:33.395 20:02:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.395 20:02:40 -- common/autotest_common.sh@10 -- # set +x 00:09:33.395 ************************************ 00:09:33.395 END TEST nvme_reset 00:09:33.395 ************************************ 00:09:33.395 20:02:41 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:33.395 20:02:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.395 20:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.395 20:02:41 -- common/autotest_common.sh@10 -- # set +x 00:09:33.395 ************************************ 00:09:33.395 START TEST nvme_identify 00:09:33.395 ************************************ 00:09:33.395 20:02:41 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:33.395 20:02:41 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:33.395 20:02:41 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:33.395 20:02:41 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.395 20:02:41 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:33.395 20:02:41 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:33.395 20:02:41 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:33.395 20:02:41 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.656 20:02:41 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.656 20:02:41 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:33.656 20:02:41 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:33.656 20:02:41 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:33.656 20:02:41 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:33.656 [2024-12-16 20:02:41.244357] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63460 terminated unexpected 00:09:33.656 ===================================================== 00:09:33.656 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:33.656 ===================================================== 00:09:33.656 Controller Capabilities/Features 00:09:33.656 ================================ 00:09:33.656 Vendor ID: 1b36 00:09:33.656 Subsystem Vendor ID: 1af4 00:09:33.656 Serial Number: 12343 00:09:33.656 Model Number: QEMU NVMe Ctrl 00:09:33.656 Firmware Version: 8.0.0 00:09:33.656 Recommended Arb 
Burst: 6 00:09:33.656 IEEE OUI Identifier: 00 54 52 00:09:33.656 Multi-path I/O 00:09:33.656 May have multiple subsystem ports: No 00:09:33.656 May have multiple controllers: Yes 00:09:33.656 Associated with SR-IOV VF: No 00:09:33.656 Max Data Transfer Size: 524288 00:09:33.656 Max Number of Namespaces: 256 00:09:33.656 Max Number of I/O Queues: 64 00:09:33.656 NVMe Specification Version (VS): 1.4 00:09:33.656 NVMe Specification Version (Identify): 1.4 00:09:33.656 Maximum Queue Entries: 2048 00:09:33.656 Contiguous Queues Required: Yes 00:09:33.656 Arbitration Mechanisms Supported 00:09:33.656 Weighted Round Robin: Not Supported 00:09:33.656 Vendor Specific: Not Supported 00:09:33.656 Reset Timeout: 7500 ms 00:09:33.656 Doorbell Stride: 4 bytes 00:09:33.656 NVM Subsystem Reset: Not Supported 00:09:33.656 Command Sets Supported 00:09:33.656 NVM Command Set: Supported 00:09:33.656 Boot Partition: Not Supported 00:09:33.656 Memory Page Size Minimum: 4096 bytes 00:09:33.656 Memory Page Size Maximum: 65536 bytes 00:09:33.656 Persistent Memory Region: Not Supported 00:09:33.656 Optional Asynchronous Events Supported 00:09:33.656 Namespace Attribute Notices: Supported 00:09:33.656 Firmware Activation Notices: Not Supported 00:09:33.656 ANA Change Notices: Not Supported 00:09:33.656 PLE Aggregate Log Change Notices: Not Supported 00:09:33.656 LBA Status Info Alert Notices: Not Supported 00:09:33.656 EGE Aggregate Log Change Notices: Not Supported 00:09:33.656 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.656 Zone Descriptor Change Notices: Not Supported 00:09:33.656 Discovery Log Change Notices: Not Supported 00:09:33.656 Controller Attributes 00:09:33.656 128-bit Host Identifier: Not Supported 00:09:33.656 Non-Operational Permissive Mode: Not Supported 00:09:33.656 NVM Sets: Not Supported 00:09:33.656 Read Recovery Levels: Not Supported 00:09:33.656 Endurance Groups: Supported 00:09:33.656 Predictable Latency Mode: Not Supported 00:09:33.656 Traffic Based Keep ALive: Not Supported 00:09:33.656 Namespace Granularity: Not Supported 00:09:33.656 SQ Associations: Not Supported 00:09:33.656 UUID List: Not Supported 00:09:33.656 Multi-Domain Subsystem: Not Supported 00:09:33.656 Fixed Capacity Management: Not Supported 00:09:33.656 Variable Capacity Management: Not Supported 00:09:33.656 Delete Endurance Group: Not Supported 00:09:33.656 Delete NVM Set: Not Supported 00:09:33.656 Extended LBA Formats Supported: Supported 00:09:33.656 Flexible Data Placement Supported: Supported 00:09:33.656 00:09:33.656 Controller Memory Buffer Support 00:09:33.656 ================================ 00:09:33.656 Supported: No 00:09:33.656 00:09:33.656 Persistent Memory Region Support 00:09:33.656 ================================ 00:09:33.656 Supported: No 00:09:33.656 00:09:33.656 Admin Command Set Attributes 00:09:33.656 ============================ 00:09:33.656 Security Send/Receive: Not Supported 00:09:33.656 Format NVM: Supported 00:09:33.656 Firmware Activate/Download: Not Supported 00:09:33.656 Namespace Management: Supported 00:09:33.656 Device Self-Test: Not Supported 00:09:33.656 Directives: Supported 00:09:33.656 NVMe-MI: Not Supported 00:09:33.656 Virtualization Management: Not Supported 00:09:33.656 Doorbell Buffer Config: Supported 00:09:33.656 Get LBA Status Capability: Not Supported 00:09:33.656 Command & Feature Lockdown Capability: Not Supported 00:09:33.656 Abort Command Limit: 4 00:09:33.656 Async Event Request Limit: 4 00:09:33.656 Number of Firmware Slots: N/A 00:09:33.656 Firmware 
Slot 1 Read-Only: N/A 00:09:33.656 Firmware Activation Without Reset: N/A 00:09:33.656 Multiple Update Detection Support: N/A 00:09:33.656 Firmware Update Granularity: No Information Provided 00:09:33.656 Per-Namespace SMART Log: Yes 00:09:33.656 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.656 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:33.656 Command Effects Log Page: Supported 00:09:33.656 Get Log Page Extended Data: Supported 00:09:33.656 Telemetry Log Pages: Not Supported 00:09:33.656 Persistent Event Log Pages: Not Supported 00:09:33.656 Supported Log Pages Log Page: May Support 00:09:33.656 Commands Supported & Effects Log Page: Not Supported 00:09:33.656 Feature Identifiers & Effects Log Page:May Support 00:09:33.656 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.656 Data Area 4 for Telemetry Log: Not Supported 00:09:33.656 Error Log Page Entries Supported: 1 00:09:33.656 Keep Alive: Not Supported 00:09:33.656 00:09:33.656 NVM Command Set Attributes 00:09:33.656 ========================== 00:09:33.656 Submission Queue Entry Size 00:09:33.656 Max: 64 00:09:33.656 Min: 64 00:09:33.656 Completion Queue Entry Size 00:09:33.656 Max: 16 00:09:33.656 Min: 16 00:09:33.656 Number of Namespaces: 256 00:09:33.656 Compare Command: Supported 00:09:33.656 Write Uncorrectable Command: Not Supported 00:09:33.656 Dataset Management Command: Supported 00:09:33.656 Write Zeroes Command: Supported 00:09:33.656 Set Features Save Field: Supported 00:09:33.656 Reservations: Not Supported 00:09:33.656 Timestamp: Supported 00:09:33.656 Copy: Supported 00:09:33.656 Volatile Write Cache: Present 00:09:33.656 Atomic Write Unit (Normal): 1 00:09:33.656 Atomic Write Unit (PFail): 1 00:09:33.656 Atomic Compare & Write Unit: 1 00:09:33.656 Fused Compare & Write: Not Supported 00:09:33.656 Scatter-Gather List 00:09:33.657 SGL Command Set: Supported 00:09:33.657 SGL Keyed: Not Supported 00:09:33.657 SGL Bit Bucket Descriptor: Not Supported 00:09:33.657 SGL Metadata Pointer: Not Supported 00:09:33.657 Oversized SGL: Not Supported 00:09:33.657 SGL Metadata Address: Not Supported 00:09:33.657 SGL Offset: Not Supported 00:09:33.657 Transport SGL Data Block: Not Supported 00:09:33.657 Replay Protected Memory Block: Not Supported 00:09:33.657 00:09:33.657 Firmware Slot Information 00:09:33.657 ========================= 00:09:33.657 Active slot: 1 00:09:33.657 Slot 1 Firmware Revision: 1.0 00:09:33.657 00:09:33.657 00:09:33.657 Commands Supported and Effects 00:09:33.657 ============================== 00:09:33.657 Admin Commands 00:09:33.657 -------------- 00:09:33.657 Delete I/O Submission Queue (00h): Supported 00:09:33.657 Create I/O Submission Queue (01h): Supported 00:09:33.657 Get Log Page (02h): Supported 00:09:33.657 Delete I/O Completion Queue (04h): Supported 00:09:33.657 Create I/O Completion Queue (05h): Supported 00:09:33.657 Identify (06h): Supported 00:09:33.657 Abort (08h): Supported 00:09:33.657 Set Features (09h): Supported 00:09:33.657 Get Features (0Ah): Supported 00:09:33.657 Asynchronous Event Request (0Ch): Supported 00:09:33.657 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.657 Directive Send (19h): Supported 00:09:33.657 Directive Receive (1Ah): Supported 00:09:33.657 Virtualization Management (1Ch): Supported 00:09:33.657 Doorbell Buffer Config (7Ch): Supported 00:09:33.657 Format NVM (80h): Supported LBA-Change 00:09:33.657 I/O Commands 00:09:33.657 ------------ 00:09:33.657 Flush (00h): Supported LBA-Change 00:09:33.657 Write (01h): 
Supported LBA-Change 00:09:33.657 Read (02h): Supported 00:09:33.657 Compare (05h): Supported 00:09:33.657 Write Zeroes (08h): Supported LBA-Change 00:09:33.657 Dataset Management (09h): Supported LBA-Change 00:09:33.657 Unknown (0Ch): Supported 00:09:33.657 Unknown (12h): Supported 00:09:33.657 Copy (19h): Supported LBA-Change 00:09:33.657 Unknown (1Dh): Supported LBA-Change 00:09:33.657 00:09:33.657 Error Log 00:09:33.657 ========= 00:09:33.657 00:09:33.657 Arbitration 00:09:33.657 =========== 00:09:33.657 Arbitration Burst: no limit 00:09:33.657 00:09:33.657 Power Management 00:09:33.657 ================ 00:09:33.657 Number of Power States: 1 00:09:33.657 Current Power State: Power State #0 00:09:33.657 Power State #0: 00:09:33.657 Max Power: 25.00 W 00:09:33.657 Non-Operational State: Operational 00:09:33.657 Entry Latency: 16 microseconds 00:09:33.657 Exit Latency: 4 microseconds 00:09:33.657 Relative Read Throughput: 0 00:09:33.657 Relative Read Latency: 0 00:09:33.657 Relative Write Throughput: 0 00:09:33.657 Relative Write Latency: 0 00:09:33.657 Idle Power: Not Reported 00:09:33.657 Active Power: Not Reported 00:09:33.657 Non-Operational Permissive Mode: Not Supported 00:09:33.657 00:09:33.657 Health Information 00:09:33.657 ================== 00:09:33.657 Critical Warnings: 00:09:33.657 Available Spare Space: OK 00:09:33.657 Temperature: OK 00:09:33.657 Device Reliability: OK 00:09:33.657 Read Only: No 00:09:33.657 Volatile Memory Backup: OK 00:09:33.657 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.657 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.657 Available Spare: 0% 00:09:33.657 Available Spare Threshold: 0% 00:09:33.657 Life Percentage Used: 0% 00:09:33.657 Data Units Read: 1428 00:09:33.657 Data Units Written: 666 00:09:33.657 Host Read Commands: 62334 00:09:33.657 Host Write Commands: 30680 00:09:33.657 Controller Busy Time: 0 minutes 00:09:33.657 Power Cycles: 0 00:09:33.657 Power On Hours: 0 hours 00:09:33.657 Unsafe Shutdowns: 0 00:09:33.657 Unrecoverable Media Errors: 0 00:09:33.657 Lifetime Error Log Entries: 0 00:09:33.657 Warning Temperature Time: 0 minutes 00:09:33.657 Critical Temperature Time: 0 minutes 00:09:33.657 00:09:33.657 Number of Queues 00:09:33.657 ================ 00:09:33.657 Number of I/O Submission Queues: 64 00:09:33.657 Number of I/O Completion Queues: 64 00:09:33.657 00:09:33.657 ZNS Specific Controller Data 00:09:33.657 ============================ 00:09:33.657 Zone Append Size Limit: 0 00:09:33.657 00:09:33.657 00:09:33.657 Active Namespaces 00:09:33.657 ================= 00:09:33.657 Namespace ID:1 00:09:33.657 Error Recovery Timeout: Unlimited 00:09:33.657 Command Set Identifier: NVM (00h) 00:09:33.657 Deallocate: Supported 00:09:33.657 Deallocated/Unwritten Error: Supported 00:09:33.657 Deallocated Read Value: All 0x00 00:09:33.657 Deallocate in Write Zeroes: Not Supported 00:09:33.657 Deallocated Guard Field: 0xFFFF 00:09:33.657 Flush: Supported 00:09:33.657 Reservation: Not Supported 00:09:33.657 Namespace Sharing Capabilities: Multiple Controllers 00:09:33.657 Size (in LBAs): 262144 (1GiB) 00:09:33.657 Capacity (in LBAs): 262144 (1GiB) 00:09:33.657 Utilization (in LBAs): 262144 (1GiB) 00:09:33.657 Thin Provisioning: Not Supported 00:09:33.657 Per-NS Atomic Units: No 00:09:33.657 Maximum Single Source Range Length: 128 00:09:33.657 Maximum Copy Length: 128 00:09:33.657 Maximum Source Range Count: 128 00:09:33.657 NGUID/EUI64 Never Reused: No 00:09:33.657 Namespace Write Protected: No 00:09:33.657 Endurance group ID: 1 
00:09:33.657 Number of LBA Formats: 8 00:09:33.657 Current LBA Format: LBA Format #04 00:09:33.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.657 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.657 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.657 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.657 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.657 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.657 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.657 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.657 00:09:33.657 Get Feature FDP: 00:09:33.657 ================ 00:09:33.657 Enabled: Yes 00:09:33.657 FDP configuration index: 0 00:09:33.657 00:09:33.657 FDP configurations log page 00:09:33.657 =========================== 00:09:33.657 Number of FDP configurations: 1 00:09:33.657 Version: 0 00:09:33.657 Size: 112 00:09:33.657 FDP Configuration Descriptor: 0 00:09:33.657 Descriptor Size: 96 00:09:33.657 Reclaim Group Identifier format: 2 00:09:33.657 FDP Volatile Write Cache: Not Present 00:09:33.657 FDP Configuration: Valid 00:09:33.657 Vendor Specific Size: 0 00:09:33.657 Number of Reclaim Groups: 2 00:09:33.657 Number of Recalim Unit Handles: 8 00:09:33.657 Max Placement Identifiers: 128 00:09:33.657 Number of Namespaces Suppprted: 256 00:09:33.657 Reclaim unit Nominal Size: 6000000 bytes 00:09:33.657 Estimated Reclaim Unit Time Limit: Not Reported 00:09:33.657 RUH Desc #000: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #001: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #002: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #003: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #004: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #005: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #006: RUH Type: Initially Isolated 00:09:33.657 RUH Desc #007: RUH Type: Initially Isolated 00:09:33.657 00:09:33.657 FDP reclaim unit handle usage log page 00:09:33.657 =================================[2024-12-16 20:02:41.246202] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63460 terminated unexpected 00:09:33.657 ===== 00:09:33.657 Number of Reclaim Unit Handles: 8 00:09:33.657 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:33.657 RUH Usage Desc #001: RUH Attributes: Unused 00:09:33.657 RUH Usage Desc #002: RUH Attributes: Unused 00:09:33.657 RUH Usage Desc #003: RUH Attributes: Unused 00:09:33.657 RUH Usage Desc #004: RUH Attributes: Unused 00:09:33.657 RUH Usage Desc #005: RUH Attributes: Unused 00:09:33.657 RUH Usage Desc #006: RUH Attributes: Unused 00:09:33.657 RUH Usage Desc #007: RUH Attributes: Unused 00:09:33.657 00:09:33.657 FDP statistics log page 00:09:33.657 ======================= 00:09:33.657 Host bytes with metadata written: 434274304 00:09:33.657 Media bytes with metadata written: 434364416 00:09:33.657 Media bytes erased: 0 00:09:33.657 00:09:33.657 FDP events log page 00:09:33.657 =================== 00:09:33.657 Number of FDP events: 0 00:09:33.657 00:09:33.657 ===================================================== 00:09:33.657 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:33.657 ===================================================== 00:09:33.657 Controller Capabilities/Features 00:09:33.657 ================================ 00:09:33.657 Vendor ID: 1b36 00:09:33.657 Subsystem Vendor ID: 1af4 00:09:33.657 Serial Number: 12340 00:09:33.657 Model Number: QEMU NVMe Ctrl 00:09:33.657 Firmware Version: 8.0.0 00:09:33.657 Recommended 
Arb Burst: 6 00:09:33.657 IEEE OUI Identifier: 00 54 52 00:09:33.657 Multi-path I/O 00:09:33.657 May have multiple subsystem ports: No 00:09:33.657 May have multiple controllers: No 00:09:33.657 Associated with SR-IOV VF: No 00:09:33.657 Max Data Transfer Size: 524288 00:09:33.657 Max Number of Namespaces: 256 00:09:33.657 Max Number of I/O Queues: 64 00:09:33.657 NVMe Specification Version (VS): 1.4 00:09:33.657 NVMe Specification Version (Identify): 1.4 00:09:33.658 Maximum Queue Entries: 2048 00:09:33.658 Contiguous Queues Required: Yes 00:09:33.658 Arbitration Mechanisms Supported 00:09:33.658 Weighted Round Robin: Not Supported 00:09:33.658 Vendor Specific: Not Supported 00:09:33.658 Reset Timeout: 7500 ms 00:09:33.658 Doorbell Stride: 4 bytes 00:09:33.658 NVM Subsystem Reset: Not Supported 00:09:33.658 Command Sets Supported 00:09:33.658 NVM Command Set: Supported 00:09:33.658 Boot Partition: Not Supported 00:09:33.658 Memory Page Size Minimum: 4096 bytes 00:09:33.658 Memory Page Size Maximum: 65536 bytes 00:09:33.658 Persistent Memory Region: Not Supported 00:09:33.658 Optional Asynchronous Events Supported 00:09:33.658 Namespace Attribute Notices: Supported 00:09:33.658 Firmware Activation Notices: Not Supported 00:09:33.658 ANA Change Notices: Not Supported 00:09:33.658 PLE Aggregate Log Change Notices: Not Supported 00:09:33.658 LBA Status Info Alert Notices: Not Supported 00:09:33.658 EGE Aggregate Log Change Notices: Not Supported 00:09:33.658 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.658 Zone Descriptor Change Notices: Not Supported 00:09:33.658 Discovery Log Change Notices: Not Supported 00:09:33.658 Controller Attributes 00:09:33.658 128-bit Host Identifier: Not Supported 00:09:33.658 Non-Operational Permissive Mode: Not Supported 00:09:33.658 NVM Sets: Not Supported 00:09:33.658 Read Recovery Levels: Not Supported 00:09:33.658 Endurance Groups: Not Supported 00:09:33.658 Predictable Latency Mode: Not Supported 00:09:33.658 Traffic Based Keep ALive: Not Supported 00:09:33.658 Namespace Granularity: Not Supported 00:09:33.658 SQ Associations: Not Supported 00:09:33.658 UUID List: Not Supported 00:09:33.658 Multi-Domain Subsystem: Not Supported 00:09:33.658 Fixed Capacity Management: Not Supported 00:09:33.658 Variable Capacity Management: Not Supported 00:09:33.658 Delete Endurance Group: Not Supported 00:09:33.658 Delete NVM Set: Not Supported 00:09:33.658 Extended LBA Formats Supported: Supported 00:09:33.658 Flexible Data Placement Supported: Not Supported 00:09:33.658 00:09:33.658 Controller Memory Buffer Support 00:09:33.658 ================================ 00:09:33.658 Supported: No 00:09:33.658 00:09:33.658 Persistent Memory Region Support 00:09:33.658 ================================ 00:09:33.658 Supported: No 00:09:33.658 00:09:33.658 Admin Command Set Attributes 00:09:33.658 ============================ 00:09:33.658 Security Send/Receive: Not Supported 00:09:33.658 Format NVM: Supported 00:09:33.658 Firmware Activate/Download: Not Supported 00:09:33.658 Namespace Management: Supported 00:09:33.658 Device Self-Test: Not Supported 00:09:33.658 Directives: Supported 00:09:33.658 NVMe-MI: Not Supported 00:09:33.658 Virtualization Management: Not Supported 00:09:33.658 Doorbell Buffer Config: Supported 00:09:33.658 Get LBA Status Capability: Not Supported 00:09:33.658 Command & Feature Lockdown Capability: Not Supported 00:09:33.658 Abort Command Limit: 4 00:09:33.658 Async Event Request Limit: 4 00:09:33.658 Number of Firmware Slots: N/A 00:09:33.658 
Firmware Slot 1 Read-Only: N/A 00:09:33.658 Firmware Activation Without Reset: N/A 00:09:33.658 Multiple Update Detection Support: N/A 00:09:33.658 Firmware Update Granularity: No Information Provided 00:09:33.658 Per-Namespace SMART Log: Yes 00:09:33.658 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.658 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:33.658 Command Effects Log Page: Supported 00:09:33.658 Get Log Page Extended Data: Supported 00:09:33.658 Telemetry Log Pages: Not Supported 00:09:33.658 Persistent Event Log Pages: Not Supported 00:09:33.658 Supported Log Pages Log Page: May Support 00:09:33.658 Commands Supported & Effects Log Page: Not Supported 00:09:33.658 Feature Identifiers & Effects Log Page:May Support 00:09:33.658 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.658 Data Area 4 for Telemetry Log: Not Supported 00:09:33.658 Error Log Page Entries Supported: 1 00:09:33.658 Keep Alive: Not Supported 00:09:33.658 00:09:33.658 NVM Command Set Attributes 00:09:33.658 ========================== 00:09:33.658 Submission Queue Entry Size 00:09:33.658 Max: 64 00:09:33.658 Min: 64 00:09:33.658 Completion Queue Entry Size 00:09:33.658 Max: 16 00:09:33.658 Min: 16 00:09:33.658 Number of Namespaces: 256 00:09:33.658 Compare Command: Supported 00:09:33.658 Write Uncorrectable Command: Not Supported 00:09:33.658 Dataset Management Command: Supported 00:09:33.658 Write Zeroes Command: Supported 00:09:33.658 Set Features Save Field: Supported 00:09:33.658 Reservations: Not Supported 00:09:33.658 Timestamp: Supported 00:09:33.658 Copy: Supported 00:09:33.658 Volatile Write Cache: Present 00:09:33.658 Atomic Write Unit (Normal): 1 00:09:33.658 Atomic Write Unit (PFail): 1 00:09:33.658 Atomic Compare & Write Unit: 1 00:09:33.658 Fused Compare & Write: Not Supported 00:09:33.658 Scatter-Gather List 00:09:33.658 SGL Command Set: Supported 00:09:33.658 SGL Keyed: Not Supported 00:09:33.658 SGL Bit Bucket Descriptor: Not Supported 00:09:33.658 SGL Metadata Pointer: Not Supported 00:09:33.658 Oversized SGL: Not Supported 00:09:33.658 SGL Metadata Address: Not Supported 00:09:33.658 SGL Offset: Not Supported 00:09:33.658 Transport SGL Data Block: Not Supported 00:09:33.658 Replay Protected Memory Block: Not Supported 00:09:33.658 00:09:33.658 Firmware Slot Information 00:09:33.658 ========================= 00:09:33.658 Active slot: 1 00:09:33.658 Slot 1 Firmware Revision: 1.0 00:09:33.658 00:09:33.658 00:09:33.658 Commands Supported and Effects 00:09:33.658 ============================== 00:09:33.658 Admin Commands 00:09:33.658 -------------- 00:09:33.658 Delete I/O Submission Queue (00h): Supported 00:09:33.658 Create I/O Submission Queue (01h): Supported 00:09:33.658 Get Log Page (02h): Supported 00:09:33.658 Delete I/O Completion Queue (04h): Supported 00:09:33.658 Create I/O Completion Queue (05h): Supported 00:09:33.658 Identify (06h): Supported 00:09:33.658 Abort (08h): Supported 00:09:33.658 Set Features (09h): Supported 00:09:33.658 Get Features (0Ah): Supported 00:09:33.658 Asynchronous Event Request (0Ch): Supported 00:09:33.658 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.658 Directive Send (19h): Supported 00:09:33.658 Directive Receive (1Ah): Supported 00:09:33.658 Virtualization Management (1Ch): Supported 00:09:33.658 Doorbell Buffer Config (7Ch): Supported 00:09:33.658 Format NVM (80h): Supported LBA-Change 00:09:33.658 I/O Commands 00:09:33.658 ------------ 00:09:33.658 Flush (00h): Supported LBA-Change 00:09:33.658 Write (01h): 
Supported LBA-Change 00:09:33.658 Read (02h): Supported 00:09:33.658 Compare (05h): Supported 00:09:33.658 Write Zeroes (08h): Supported LBA-Change 00:09:33.658 Dataset Management (09h): Supported LBA-Change 00:09:33.658 Unknown (0Ch): Supported 00:09:33.658 Unknown (12h): Supported 00:09:33.658 Copy (19h): Supported LBA-Change 00:09:33.658 Unknown (1Dh): Supported LBA-Change 00:09:33.658 00:09:33.658 Error Log 00:09:33.658 ========= 00:09:33.658 00:09:33.658 Arbitration 00:09:33.658 =========== 00:09:33.658 Arbitration Burst: no limit 00:09:33.658 00:09:33.658 Power Management 00:09:33.658 ================ 00:09:33.658 Number of Power States: 1 00:09:33.658 Current Power State: Power State #0 00:09:33.658 Power State #0: 00:09:33.658 Max Power: 25.00 W 00:09:33.658 Non-Operational State: Operational 00:09:33.658 Entry Latency: 16 microseconds 00:09:33.658 Exit Latency: 4 microseconds 00:09:33.658 Relative Read Throughput: 0 00:09:33.658 Relative Read Latency: 0 00:09:33.658 Relative Write Throughput: 0 00:09:33.658 Relative Write Latency: 0 00:09:33.658 Idle Power: Not Reported 00:09:33.658 Active Power: Not Reported 00:09:33.658 Non-Operational Permissive Mode: Not Supported 00:09:33.658 00:09:33.658 Health Information 00:09:33.658 ================== 00:09:33.658 Critical Warnings: 00:09:33.658 Available Spare Space: OK 00:09:33.658 Temperature: OK 00:09:33.658 Device Reliability: OK 00:09:33.658 Read Only: No 00:09:33.658 Volatile Memory Backup: OK 00:09:33.658 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.658 Available Spare: 0% 00:09:33.658 Available Spare Threshold: 0% 00:09:33.658 Life Percentage Used: 0% 00:09:33.658 Data Units Read: 1869 00:09:33.658 Data Units Written: 867 00:09:33.658 Host Read Commands: 91846 00:09:33.658 Host Write Commands: 45719 00:09:33.658 Controller Busy Time: 0 minutes 00:09:33.658 Power Cycles: 0 00:09:33.658 Power On Hours: 0 hours 00:09:33.658 Unsafe Shutdowns: 0 00:09:33.658 Unrecoverable Media Errors: 0 00:09:33.658 Lifetime Error Log Entries: 0 00:09:33.658 Warning Temperature Time: 0 minutes 00:09:33.658 Critical Temperature Time: 0 minutes 00:09:33.658 00:09:33.658 Number of Queues 00:09:33.658 ================ 00:09:33.658 Number of I/O Submission Queues: 64 00:09:33.658 Number of I/O Completion Queues: 64 00:09:33.658 00:09:33.658 ZNS Specific Controller Data 00:09:33.658 ============================ 00:09:33.658 Zone Append Size Limit: 0 00:09:33.659 00:09:33.659 00:09:33.659 Active Namespaces 00:09:33.659 ================= 00:09:33.659 Namespace ID:1 00:09:33.659 Error Recovery Timeout: Unlimited 00:09:33.659 Command Set Identifier: NVM (00h) 00:09:33.659 Deallocate: Supported 00:09:33.659 Deallocated/Unwritten Error: Supported 00:09:33.659 Deallocated Read Value: All 0x00 00:09:33.659 Deallocate in Write Zeroes: Not Supported 00:09:33.659 Deallocated Guard Field: 0xFFFF 00:09:33.659 Flush: Supported 00:09:33.659 Reservation: Not Supported 00:09:33.659 Metadata Transferred as: Separate Metadata Buffer 00:09:33.659 Namespace Sharing Capabilities: Private 00:09:33.659 Size (in LBAs): 1548666 (5GiB) 00:09:33.659 Capacity (in LBAs): 1548666 (5GiB) 00:09:33.659 Utilization (in LBAs): 1548666 (5GiB) 00:09:33.659 Thin Provisioning: Not Supported 00:09:33.659 Per-NS Atomic Units: No 00:09:33.659 Maximum Single Source Range Length: 128 00:09:33.659 Maximum Copy Length: 128 00:09:33.659 Maximum Source Range Count: 128 00:09:33.659 NGUID/EUI64 Never Reused: No 00:09:33.659 Namespace 
Write Protected: No 00:09:33.659 Number of LBA Formats: 8 00:09:33.659 Current LBA Format: LBA Format #07 00:09:33.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.659 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.659 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.659 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.659 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.659 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.659 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.659 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.659 00:09:33.659 ===================================================== 00:09:33.659 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:33.659 ===================================================== 00:09:33.659 Controller Capabilities/Features 00:09:33.659 ================================ 00:09:33.659 Vendor ID: 1b36 00:09:33.659 Subsystem Vendor ID: 1af4 00:09:33.659 Serial Number: 12341 00:09:33.659 Model Number: QEMU NVMe Ctrl 00:09:33.659 Firmware Version: 8.0.0 00:09:33.659 Recommended Arb Burst: 6 00:09:33.659 IEEE OUI Identifier: 00 54 52 00:09:33.659 Multi-path I/O 00:09:33.659 May have multiple subsystem ports: No 00:09:33.659 May have multiple controllers: No 00:09:33.659 Associated with SR-IOV VF: No 00:09:33.659 Max Data Transfer Size: 524288 00:09:33.659 Max Number of Namespaces: 256 00:09:33.659 Max Number of I/O Queues: 64 00:09:33.659 NVMe Specification Version (VS): 1.4 00:09:33.659 NVMe Specification Version (Identify): 1.4 00:09:33.659 Maximum Queue Entries: 2048 00:09:33.659 Contiguous Queues Required: Yes 00:09:33.659 Arbitration Mechanisms Supported 00:09:33.659 Weighted Round Robin: Not Supported 00:09:33.659 Vendor Specific: Not Supported 00:09:33.659 Reset Timeout: 7500 ms 00:09:33.659 Doorbell Stride: 4 bytes 00:09:33.659 NVM Subsystem Reset: Not Supported 00:09:33.659 Command Sets Supported 00:09:33.659 NVM Command Set: Supported 00:09:33.659 Boot Partition: Not Supported 00:09:33.659 Memory Page Size Minimum: 4096 bytes 00:09:33.659 Memory Page Size Maximum: 65536 bytes 00:09:33.659 Persistent Memory Region: Not Supported 00:09:33.659 Optional Asynchronous Events Supported 00:09:33.659 Namespace Attribute Notices: Supported 00:09:33.659 Firmware Activation Notices: Not Supported 00:09:33.659 ANA Change Notices: Not Supported 00:09:33.659 PLE Aggregate Log Change Notices: Not Supported 00:09:33.659 LBA Status Info Alert Notices: Not Supported 00:09:33.659 EGE Aggregate Log Change Notices: Not Supported 00:09:33.659 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.659 Zone Descriptor Change Notices: Not Supported 00:09:33.659 Discovery Log Change Notices: Not Supported 00:09:33.659 Controller Attributes 00:09:33.659 128-bit Host Identifier: Not Supported 00:09:33.659 Non-Operational Permissive Mode: Not Supported 00:09:33.659 NVM Sets: Not Supported 00:09:33.659 Read Recovery Levels: Not Supported 00:09:33.659 Endurance Groups: Not Supported 00:09:33.659 Predictable Latency Mode: Not Supported 00:09:33.659 Traffic Based Keep ALive: Not Supported 00:09:33.659 Namespace Granularity: Not Supported 00:09:33.659 SQ Associations: Not Supported 00:09:33.659 UUID List: Not Supported 00:09:33.659 Multi-Domain Subsystem: Not Supported 00:09:33.659 Fixed Capacity Management: Not Supported 00:09:33.659 Variable Capacity Management: Not Supported 00:09:33.659 Delete Endurance Group: Not Supported 00:09:33.659 Delete NVM Set: Not Supported 00:09:33.659 Extended LBA 
Formats Supported: Supported 00:09:33.659 Flexible Data Placement Supported: Not Supported 00:09:33.659 00:09:33.659 Controller Memory Buffer Support 00:09:33.659 ================================ 00:09:33.659 Supported: No 00:09:33.659 00:09:33.659 Persistent Memory Region Support 00:09:33.659 ================================ 00:09:33.659 Supported: No 00:09:33.659 00:09:33.659 Admin Command Set Attributes 00:09:33.659 ============================ 00:09:33.659 Security Send/Receive: Not Supported 00:09:33.659 Format NVM: Supported 00:09:33.659 Firmware Activate/Download: Not Supported 00:09:33.659 Namespace Management: Supported 00:09:33.659 Device Self-Test: Not Supported 00:09:33.659 Directives: Supported 00:09:33.659 NVMe-MI: Not Supported 00:09:33.659 Virtualization Management: Not Supported 00:09:33.659 Doorbell Buffer Config: Supported 00:09:33.659 Get LBA Status Capability: Not Supported 00:09:33.659 Command & Feature Lockdown Capability: Not Supported 00:09:33.659 Abort Command Limit: 4 00:09:33.659 Async Event Request Limit: 4 00:09:33.659 Number of Firmware Slots: N/A 00:09:33.659 Firmware Slot 1 Read-Only: N/A 00:09:33.659 Firmware Activation Without Reset: N/A 00:09:33.659 Multiple Update Detection Support: N/A 00:09:33.659 Firmware Update Granularity: No Information Provided 00:09:33.659 Per-Namespace SMART Log: Yes 00:09:33.659 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.659 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:33.659 Command Effects Log Page: Supported 00:09:33.659 Get Log Page Extended Data: Supported 00:09:33.659 Telemetry Log Pages: Not Supported 00:09:33.659 Persistent Event Log Pages: Not Supported 00:09:33.659 Supported Log Pages Log Page: May Support 00:09:33.659 Commands Supported & Effects Log Page: Not Supported 00:09:33.659 Feature Identifiers & Effects Log Page:May Support 00:09:33.659 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.659 Data Area 4 for Telemetry Log: Not Supported 00:09:33.659 Error Log Page Entries Supported: 1 00:09:33.659 Keep Alive: Not Supported 00:09:33.659 00:09:33.659 NVM Command Set Attributes 00:09:33.659 ========================== 00:09:33.659 Submission Queue Entry Size 00:09:33.659 Max: 64 00:09:33.659 Min: 64 00:09:33.659 Completion Queue Entry Size 00:09:33.659 Max: 16 00:09:33.659 Min: 16 00:09:33.659 Number of Namespaces: 256 00:09:33.659 Compare Command: Supported 00:09:33.659 Write Uncorrectable Command: Not Supported 00:09:33.659 Dataset Management Command: Supported 00:09:33.659 Write Zeroes Command: Supported 00:09:33.659 Set Features Save Field: Supported 00:09:33.659 Reservations: Not Supported 00:09:33.659 Timestamp: Supported 00:09:33.659 Copy: Supported 00:09:33.659 Volatile Write Cache: Present 00:09:33.659 Atomic Write Unit (Normal): 1 00:09:33.659 Atomic Write Unit (PFail): 1 00:09:33.659 Atomic Compare & Write Unit: 1 00:09:33.659 Fused Compare & Write: Not Supported 00:09:33.659 Scatter-Gather List 00:09:33.659 SGL Command Set: Supported 00:09:33.659 SGL Keyed: Not Supported 00:09:33.659 SGL Bit Bucket Descriptor: Not Supported 00:09:33.659 SGL Metadata Pointer: Not Supported 00:09:33.659 Oversized SGL: Not Supported 00:09:33.659 SGL Metadata Address: Not Supported 00:09:33.659 SGL Offset: Not Supported 00:09:33.659 Transport SGL Data Block: Not Supported 00:09:33.659 Replay Protected Memory Block: Not Supported 00:09:33.659 00:09:33.659 Firmware Slot Information 00:09:33.659 ========================= 00:09:33.659 Active slot: 1 00:09:33.659 Slot 1 Firmware Revision: 1.0 
00:09:33.659 00:09:33.659 00:09:33.659 Commands Supported and Effects 00:09:33.659 ============================== 00:09:33.659 Admin Commands 00:09:33.659 -------------- 00:09:33.659 Delete I/O Submission Queue (00h): Supported 00:09:33.659 Create I/O Submission Queue (01h): Supported 00:09:33.659 Get Log Page (02h): Supported 00:09:33.659 Delete I/O Completion Queue (04h): Supported 00:09:33.659 Create I/O Completion Queue (05h): Supported 00:09:33.659 Identify (06h): Supported 00:09:33.659 Abort (08h): Supported 00:09:33.659 Set Features (09h): Supported 00:09:33.659 Get Features (0Ah): Supported 00:09:33.659 Asynchronous Event Request (0Ch): Supported 00:09:33.659 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.659 Directive Send (19h): Supported 00:09:33.659 Directive Receive (1Ah): Supported 00:09:33.659 Virtualization Management (1Ch): Supported 00:09:33.659 Doorbell Buffer Config (7Ch): Supported 00:09:33.660 Format NVM (80h): Supported LBA-Change 00:09:33.660 I/O Commands 00:09:33.660 ------------ 00:09:33.660 Flush (00h): Supported LBA-Change 00:09:33.660 Write (01h): Supported LBA-Change 00:09:33.660 Read (02h): Supported 00:09:33.660 Compare (05h): Supported 00:09:33.660 Write Zeroes (08h): Supported LBA-Change 00:09:33.660 Dataset Management (09h): Supported LBA-Change 00:09:33.660 Unknown (0Ch): Supported 00:09:33.660 Unknown (12h): Supported 00:09:33.660 Copy (19h): Supported LBA-Change 00:09:33.660 Unknown (1Dh): Supported LBA-Change 00:09:33.660 00:09:33.660 Error Log 00:09:33.660 ========= 00:09:33.660 00:09:33.660 Arbitration 00:09:33.660 =========== 00:09:33.660 Arbitration Burst: no limit 00:09:33.660 00:09:33.660 Power Management 00:09:33.660 ================ 00:09:33.660 Number of Power States: 1 00:09:33.660 Current Power State: Power State #0 00:09:33.660 Power State #0: 00:09:33.660 Max Power: 25.00 W 00:09:33.660 Non-Operational State: Operational 00:09:33.660 Entry Latency: 16 microseconds 00:09:33.660 Exit Latency: 4 microseconds 00:09:33.660 Relative Read Throughput: 0 00:09:33.660 Relative Read Latency: 0 00:09:33.660 Relative Write Throughput: 0 00:09:33.660 Relative Write Latency: 0 00:09:33.660 Idle Power: Not Reported 00:09:33.660 Active Power: Not Reported 00:09:33.660 Non-Operational Permissive Mode: Not Supported 00:09:33.660 00:09:33.660 Health Information 00:09:33.660 ================== 00:09:33.660 Critical Warnings: 00:09:33.660 Available Spare Space: OK 00:09:33.660 Temperature: OK 00:09:33.660 Device Reliability: OK 00:09:33.660 Read Only: No 00:09:33.660 Volatile Memory Backup: OK 00:09:33.660 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.660 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.660 Available Spare: 0% 00:09:33.660 Available Spare Threshold: 0% 00:09:33.660 Life Percentage Used: 0% 00:09:33.660 Data Units Read: 1266 00:09:33.660 Data Units Written: 589 00:09:33.660 Host Read Commands: 60861 00:09:33.660 Host Write Commands: 29969 00:09:33.660 Controller Busy Time: 0 minutes 00:09:33.660 Power Cycles: 0 00:09:33.660 Power On Hours: 0 hours 00:09:33.660 Unsafe Shutdowns: 0 00:09:33.660 Unrecoverable Media Errors: 0 00:09:33.660 Lifetime Error Log Entries: 0 00:09:33.660 Warning Temperature Time: 0 minutes 00:09:33.660 Critical Temperature Time: 0 minutes 00:09:33.660 00:09:33.660 Number of Queues 00:09:33.660 ================ 00:09:33.660 Number of I/O Submission Queues: 64 00:09:33.660 Number of I/O Completion Queues: 64 00:09:33.660 00:09:33.660 ZNS Specific Controller Data 00:09:33.660 
============================ 00:09:33.660 Zone Append Size Limit: 0 00:09:33.660 00:09:33.660 00:09:33.660 Active Namespaces 00:09:33.660 ================= 00:09:33.660 Namespace ID:1 00:09:33.660 Error Recovery Timeout: Unlimited 00:09:33.660 Command Set Identifier: NVM (00h) 00:09:33.660 Deallocate: Supported 00:09:33.660 Deallocated/Unwritten Error: Supported 00:09:33.660 Deallocated Read Value: All 0x00 00:09:33.660 Deallocate in Write Zeroes: Not Supported 00:09:33.660 Deallocated Guard Field: 0xFFFF 00:09:33.660 Flush: Supported 00:09:33.660 Reservation: Not Supported 00:09:33.660 Namespace Sharing Capabilities: Private 00:09:33.660 Size (in LBAs): 1310720 (5GiB) 00:09:33.660 Capacity (in LBAs): 1310720 (5GiB) 00:09:33.660 Utilization (in LBAs): 1310720 (5GiB) 00:09:33.660 Thin Provisioning: Not Supported 00:09:33.660 Per-NS Atomic Units: No 00:09:33.660 Maximum Single Source Range Length: 128 00:09:33.660 Maximum Copy Length: 128 00:09:33.660 Maximum Source Range Count: 128 00:09:33.660 NGUID/EUI64 Never Reused: No 00:09:33.660 Namespace Write Protected: No 00:09:33.660 Number of LBA Formats: 8 00:09:33.660 Current LBA Format: LBA Format #04 00:09:33.660 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.660 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.660 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.660 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.660 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.660 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.660 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.660 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.660 00:09:33.660 ===================================================== 00:09:33.660 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:33.660 ===================================================== 00:09:33.660 Controller Capabilities/Features 00:09:33.660 ================================ 00:09:33.660 Vendor ID: 1b36 00:09:33.660 Subsystem Vendor ID: 1af4 00:09:33.660 Serial Number: 12342 00:09:33.660 Model Number: QEMU NVMe Ctrl 00:09:33.660 Firmware Version: 8.0.0 00:09:33.660 Recommended Arb Burst: 6 00:09:33.660 IEEE OUI Identifier: 00 54 52 00:09:33.660 Multi-path I/O 00:09:33.660 May have multiple subsystem ports: No 00:09:33.660 May have multiple controllers: No 00:09:33.660 Associated with SR-IOV VF: No 00:09:33.660 Max Data Transfer Size: 524288 00:09:33.660 Max Number of Namespaces: 256 00:09:33.660 Max Number of I/O Queues: 64 00:09:33.660 NVMe Specification Version (VS): 1.4 00:09:33.660 NVMe Specification Version (Identify): 1.4 00:09:33.660 Maximum Queue Entries: 2048 00:09:33.660 Contiguous Queues Required: Yes 00:09:33.660 Arbitration Mechanisms Supported 00:09:33.660 Weighted Round Robin: Not Supported 00:09:33.660 Vendor Specific: Not Supported 00:09:33.660 Reset Timeout: 7500 ms 00:09:33.660 Doorbell Stride: 4 bytes 00:09:33.660 NVM Subsystem Reset: Not Supported 00:09:33.660 Command Sets Supported 00:09:33.660 NVM Command Set: Supported 00:09:33.660 Boot Partition: Not Supported 00:09:33.660 Memory Page Size Minimum: 4096 bytes 00:09:33.660 Memory Page Size Maximum: 65536 bytes 00:09:33.660 Persistent Memory Region: Not Supported 00:09:33.660 Optional Asynchronous Events Supported 00:09:33.660 Namespace Attribute Notices: Supported 00:09:33.660 Firmware Activation Notices: Not Supported 00:09:33.660 ANA Change Notices: Not Supported 00:09:33.660 PLE Aggregate Log Change Notices: Not Supported 00:09:33.660 LBA Status Info Alert 
Notices: Not Supported 00:09:33.660 EGE Aggregate Log Change Notices: Not Supported 00:09:33.660 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.660 Zone Descriptor Change Notices: Not Supported 00:09:33.660 Discovery Log Change Notices: Not Supported 00:09:33.660 Controller Attributes 00:09:33.660 128-bit Host Identifier: Not Supported 00:09:33.660 Non-Operational Permissive Mode: Not Supported 00:09:33.660 NVM Sets: Not Supported 00:09:33.660 Read Recovery Levels: Not Supported 00:09:33.660 Endurance Groups: Not Supported 00:09:33.660 Predictable Latency Mode: Not Supported 00:09:33.660 Traffic Based Keep ALive: Not Supported 00:09:33.660 Namespace Granularity: Not Supported 00:09:33.660 SQ Associations: Not Supported 00:09:33.660 UUID List: Not Supported 00:09:33.660 Multi-Domain Subsystem: Not Supported 00:09:33.660 Fixed Capacity Management: Not Supported 00:09:33.660 Variable Capacity Management: Not Supported 00:09:33.660 Delete Endurance Group: Not Supported 00:09:33.660 Delete NVM Set: Not Supported 00:09:33.660 Extended LBA Formats Supported: Supported 00:09:33.660 Flexible Data Placement Supported: Not Supported 00:09:33.660 00:09:33.660 Controller Memory Buffer Support 00:09:33.660 ================================ 00:09:33.660 Supported: No 00:09:33.660 00:09:33.660 Persistent Memory Region Support 00:09:33.660 ================================ 00:09:33.660 Supported: No 00:09:33.660 00:09:33.660 Admin Command Set Attributes 00:09:33.660 ============================ 00:09:33.660 Security Send/Receive: Not Supported 00:09:33.660 Format NVM: Supported 00:09:33.660 Firmware Activate/Download: Not Supported 00:09:33.660 Namespace Management: Supported 00:09:33.660 Device Self-Test: Not Supported 00:09:33.660 Directives: Supported 00:09:33.660 NVMe-MI: Not Supported 00:09:33.660 Virtualization Management: Not Supported 00:09:33.660 Doorbell Buffer Config: Supported 00:09:33.660 Get LBA Status Capability: Not Supported 00:09:33.660 Command & Feature Lockdown Capability: Not Supported 00:09:33.660 Abort Command Limit: 4 00:09:33.661 Async Event Request Limit: 4 00:09:33.661 Number of Firmware Slots: N/A 00:09:33.661 Firmware Slot 1 Read-Only: N/A 00:09:33.661 Firmware Activation Without Reset: N/A 00:09:33.661 Multiple Update Detection Support: N/A 00:09:33.661 Firmware Update Granularity: No Information Provided 00:09:33.661 Per-Namespace SMART Log: Yes 00:09:33.661 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.661 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:33.661 Command Effects Log Page: Supported 00:09:33.661 Get Log Page Extended Data: Supported 00:09:33.661 Telemetry Log Pages: Not Supported 00:09:33.661 Persistent Event Log Pages: Not Supported 00:09:33.661 Supported Log Pages Log Page: May Support 00:09:33.661 Commands Supported & Effects Log Page: Not Supported 00:09:33.661 Feature Identifiers & Effects Log Page:May Support 00:09:33.661 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.661 Data Area 4 for Telemetry Log: Not Supported 00:09:33.661 Error Log Page Entries Supported: 1 00:09:33.661 Keep Alive: Not Supported 00:09:33.661 00:09:33.661 NVM Command Set Attributes 00:09:33.661 ========================== 00:09:33.661 Submission Queue Entry Size 00:09:33.661 Max: 64 00:09:33.661 Min: 64 00:09:33.661 Completion Queue Entry Size 00:09:33.661 Max: 16 00:09:33.661 Min: 16 00:09:33.661 Number of Namespaces: 256 00:09:33.661 Compare Command: Supported 00:09:33.661 Write Uncorrectable Command: Not Supported 00:09:33.661 Dataset Management 
Command: Supported 00:09:33.661 Write Zeroes Command: Supported 00:09:33.661 Set Features Save Field: Supported 00:09:33.661 Reservations: Not Supported 00:09:33.661 Timestamp: Supported 00:09:33.661 Copy: Supported 00:09:33.661 Volatile Write Cache: Present 00:09:33.661 Atomic Write Unit (Normal): 1 00:09:33.661 Atomic Write Unit (PFail): 1 00:09:33.661 Atomic Compare & Write Unit: 1 00:09:33.661 Fused Compare & Write: Not Supported 00:09:33.661 Scatter-Gather List 00:09:33.661 SGL Command Set: Supported 00:09:33.661 SGL Keyed: Not Supported 00:09:33.661 SGL Bit Bucket Descriptor: Not Supported 00:09:33.661 SGL Metadata Pointer: Not Supported 00:09:33.661 Oversized SGL: Not Supported 00:09:33.661 SGL Metadata Address: Not Supported 00:09:33.661 SGL Offset: Not Supported 00:09:33.661 Transport SGL Data Block: Not Supported 00:09:33.661 Replay Protected Memory Block: Not Supported 00:09:33.661 00:09:33.661 Firmware Slot Information 00:09:33.661 ========================= 00:09:33.661 Active slot: 1 00:09:33.661 Slot 1 Firmware Revision: 1.0 00:09:33.661 00:09:33.661 00:09:33.661 Commands Supported and Effects 00:09:33.661 ============================== 00:09:33.661 Admin Commands 00:09:33.661 -------------- 00:09:33.661 Delete I/O Submission Queue (00h): Supported 00:09:33.661 Create I/O Submission Queue (01h): Supported 00:09:33.661 Get Log Page (02h): Supported 00:09:33.661 Delete I/O Completion Queue (04h): Supported 00:09:33.661 Create I/O Completion Queue (05h): Supported 00:09:33.661 Identify (06h): Supported 00:09:33.661 Abort (08h): Supported 00:09:33.661 Set Features (09h): Supported 00:09:33.661 Get Features (0Ah): Supported 00:09:33.661 Asynchronous Event Request (0Ch): Supported 00:09:33.661 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.661 Directive Send (19h): Supported 00:09:33.661 Directive Receive (1Ah): Supported 00:09:33.661 Virtualization Management (1Ch): Supported 00:09:33.661 Doorbell Buffer Config (7Ch): Supported 00:09:33.661 Format NVM (80h): Supported LBA-Change 00:09:33.661 I/O Commands 00:09:33.661 ------------ 00:09:33.661 Flush (00h): Supported LBA-Change 00:09:33.661 Write (01h): Supported LBA-Change 00:09:33.661 Read (02h): Supported 00:09:33.661 Compare (05h): Supported 00:09:33.661 Write Zeroes (08h): Supported LBA-Change 00:09:33.661 Dataset Management (09h): Supported LBA-Change 00:09:33.661 Unknown (0Ch): Supported 00:09:33.661 Unknown (12h): Supported 00:09:33.661 Copy (19h): Supported LBA-Change 00:09:33.661 Unknown (1Dh): Supported LBA-Change 00:09:33.661 00:09:33.661 Error Log 00:09:33.661 ========= 00:09:33.661 00:09:33.661 Arbitration 00:09:33.661 =========== 00:09:33.661 Arbitration Burst: no limit 00:09:33.661 00:09:33.661 Power Management 00:09:33.661 ================ 00:09:33.661 Number of Power States: 1 00:09:33.661 Current Power State: Power State #0 00:09:33.661 Power State #0: 00:09:33.661 Max Power: 25.00 W 00:09:33.661 Non-Operational State: Operational 00:09:33.661 Entry Latency: 16 microseconds 00:09:33.661 Exit Latency: 4 microseconds 00:09:33.661 Relative Read Throughput: 0 00:09:33.661 Relative Read Latency: 0 00:09:33.661 Relative Write Throughput: 0 00:09:33.661 Relative Write Latency: 0 00:09:33.661 Idle Power: Not Reported 00:09:33.661 Active Power: Not Reported 00:09:33.661 Non-Operational Permissive Mode: Not Supported 00:09:33.661 00:09:33.661 Health Information 00:09:33.661 ================== 00:09:33.661 Critical Warnings: 00:09:33.661 Available Spare Space: OK 00:09:33.661 Temperature: OK 00:09:33.661 
Device Reliability: OK 00:09:33.661 Read Only: No 00:09:33.661 Volatile Memory Backup: OK 00:09:33.661 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.661 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.661 Available Spare: 0% 00:09:33.661 Available Spare Threshold: 0% 00:09:33.661 Life Percentage Used: 0% 00:09:33.661 Data Units Read: 3924 00:09:33.661 Data Units Written: 1822 00:09:33.661 Host Read Commands: 184185 00:09:33.661 Host Write Commands: 90533 00:09:33.661 Controller Busy Time: 0 minutes 00:09:33.661 Power Cycles: 0 00:09:33.661 Power On Hours: 0 hours 00:09:33.661 Unsafe Shutdowns: 0 00:09:33.661 Unrecoverable Media Errors: 0 00:09:33.661 Lifetime Error Log Entries: 0 00:09:33.661 Warning Temperature Time: 0 minutes 00:09:33.661 Critical Temperature Time: 0 minutes 00:09:33.661 00:09:33.661 Number of Queues 00:09:33.661 ================ 00:09:33.661 Number of I/O Submission Queues: 64 00:09:33.661 Number of I/O Completion Queues: 64 00:09:33.661 00:09:33.661 ZNS Specific Controller Data 00:09:33.661 ============================ 00:09:33.661 Zone Append Size Limit: 0 00:09:33.661 00:09:33.661 00:09:33.661 Active Namespaces 00:09:33.661 ================= 00:09:33.661 Namespace ID:1 00:09:33.661 Error Recovery Timeout: Unlimited 00:09:33.661 Command Set Identifier: NVM (00h) 00:09:33.661 Deallocate: Supported 00:09:33.661 Deallocated/Unwritten Error: Supported 00:09:33.661 Deallocated Read Value: All 0x00 00:09:33.661 Deallocate in Write Zeroes: Not Supported 00:09:33.661 Deallocated Guard Field: 0xFFFF 00:09:33.661 Flush: Supported 00:09:33.661 Reservation: Not Supported 00:09:33.661 Namespace Sharing Capabilities: Private 00:09:33.661 Size (in LBAs): 1048576 (4GiB) 00:09:33.661 Capacity (in LBAs): 1048576 (4GiB) 00:09:33.661 Utilization (in LBAs): 1048576 (4GiB) 00:09:33.661 Thin Provisioning: Not Supported 00:09:33.661 Per-NS Atomic Units: No 00:09:33.661 [2024-12-16 20:02:41.246832] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63460 terminated unexpected 00:09:33.661 [2024-12-16 20:02:41.247345] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63460 terminated unexpected 00:09:33.661 Maximum Single Source Range Length: 128 00:09:33.661 Maximum Copy Length: 128 00:09:33.661 Maximum Source Range Count: 128 00:09:33.661 NGUID/EUI64 Never Reused: No 00:09:33.661 Namespace Write Protected: No 00:09:33.661 Number of LBA Formats: 8 00:09:33.661 Current LBA Format: LBA Format #04 00:09:33.661 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.661 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.661 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.661 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.661 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.661 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.661 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.661 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.661 00:09:33.661 Namespace ID:2 00:09:33.661 Error Recovery Timeout: Unlimited 00:09:33.661 Command Set Identifier: NVM (00h) 00:09:33.661 Deallocate: Supported 00:09:33.661 Deallocated/Unwritten Error: Supported 00:09:33.661 Deallocated Read Value: All 0x00 00:09:33.661 Deallocate in Write Zeroes: Not Supported 00:09:33.661 Deallocated Guard Field: 0xFFFF 00:09:33.661 Flush: Supported 00:09:33.661 Reservation: Not Supported 00:09:33.661 Namespace Sharing Capabilities: Private 00:09:33.661 Size (in LBAs): 1048576 (4GiB) 
00:09:33.661 Capacity (in LBAs): 1048576 (4GiB) 00:09:33.662 Utilization (in LBAs): 1048576 (4GiB) 00:09:33.662 Thin Provisioning: Not Supported 00:09:33.662 Per-NS Atomic Units: No 00:09:33.662 Maximum Single Source Range Length: 128 00:09:33.662 Maximum Copy Length: 128 00:09:33.662 Maximum Source Range Count: 128 00:09:33.662 NGUID/EUI64 Never Reused: No 00:09:33.662 Namespace Write Protected: No 00:09:33.662 Number of LBA Formats: 8 00:09:33.662 Current LBA Format: LBA Format #04 00:09:33.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.662 00:09:33.662 Namespace ID:3 00:09:33.662 Error Recovery Timeout: Unlimited 00:09:33.662 Command Set Identifier: NVM (00h) 00:09:33.662 Deallocate: Supported 00:09:33.662 Deallocated/Unwritten Error: Supported 00:09:33.662 Deallocated Read Value: All 0x00 00:09:33.662 Deallocate in Write Zeroes: Not Supported 00:09:33.662 Deallocated Guard Field: 0xFFFF 00:09:33.662 Flush: Supported 00:09:33.662 Reservation: Not Supported 00:09:33.662 Namespace Sharing Capabilities: Private 00:09:33.662 Size (in LBAs): 1048576 (4GiB) 00:09:33.662 Capacity (in LBAs): 1048576 (4GiB) 00:09:33.662 Utilization (in LBAs): 1048576 (4GiB) 00:09:33.662 Thin Provisioning: Not Supported 00:09:33.662 Per-NS Atomic Units: No 00:09:33.662 Maximum Single Source Range Length: 128 00:09:33.662 Maximum Copy Length: 128 00:09:33.662 Maximum Source Range Count: 128 00:09:33.662 NGUID/EUI64 Never Reused: No 00:09:33.662 Namespace Write Protected: No 00:09:33.662 Number of LBA Formats: 8 00:09:33.662 Current LBA Format: LBA Format #04 00:09:33.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.662 00:09:33.662 20:02:41 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:33.662 20:02:41 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:33.921 ===================================================== 00:09:33.921 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:33.921 ===================================================== 00:09:33.921 Controller Capabilities/Features 00:09:33.921 ================================ 00:09:33.921 Vendor ID: 1b36 00:09:33.921 Subsystem Vendor ID: 1af4 00:09:33.921 Serial Number: 12340 00:09:33.921 Model Number: QEMU NVMe Ctrl 00:09:33.921 Firmware Version: 8.0.0 00:09:33.921 Recommended Arb Burst: 6 00:09:33.921 IEEE OUI Identifier: 00 54 52 00:09:33.921 Multi-path I/O 00:09:33.921 May have multiple subsystem ports: No 00:09:33.921 May have multiple controllers: No 00:09:33.921 Associated with SR-IOV VF: No 00:09:33.921 Max Data Transfer Size: 524288 00:09:33.921 Max Number of Namespaces: 256 00:09:33.921 Max 
Number of I/O Queues: 64 00:09:33.921 NVMe Specification Version (VS): 1.4 00:09:33.921 NVMe Specification Version (Identify): 1.4 00:09:33.921 Maximum Queue Entries: 2048 00:09:33.921 Contiguous Queues Required: Yes 00:09:33.921 Arbitration Mechanisms Supported 00:09:33.921 Weighted Round Robin: Not Supported 00:09:33.921 Vendor Specific: Not Supported 00:09:33.921 Reset Timeout: 7500 ms 00:09:33.921 Doorbell Stride: 4 bytes 00:09:33.921 NVM Subsystem Reset: Not Supported 00:09:33.921 Command Sets Supported 00:09:33.921 NVM Command Set: Supported 00:09:33.921 Boot Partition: Not Supported 00:09:33.921 Memory Page Size Minimum: 4096 bytes 00:09:33.921 Memory Page Size Maximum: 65536 bytes 00:09:33.921 Persistent Memory Region: Not Supported 00:09:33.921 Optional Asynchronous Events Supported 00:09:33.921 Namespace Attribute Notices: Supported 00:09:33.921 Firmware Activation Notices: Not Supported 00:09:33.921 ANA Change Notices: Not Supported 00:09:33.921 PLE Aggregate Log Change Notices: Not Supported 00:09:33.921 LBA Status Info Alert Notices: Not Supported 00:09:33.921 EGE Aggregate Log Change Notices: Not Supported 00:09:33.921 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.921 Zone Descriptor Change Notices: Not Supported 00:09:33.921 Discovery Log Change Notices: Not Supported 00:09:33.921 Controller Attributes 00:09:33.921 128-bit Host Identifier: Not Supported 00:09:33.921 Non-Operational Permissive Mode: Not Supported 00:09:33.921 NVM Sets: Not Supported 00:09:33.921 Read Recovery Levels: Not Supported 00:09:33.921 Endurance Groups: Not Supported 00:09:33.921 Predictable Latency Mode: Not Supported 00:09:33.921 Traffic Based Keep ALive: Not Supported 00:09:33.921 Namespace Granularity: Not Supported 00:09:33.921 SQ Associations: Not Supported 00:09:33.921 UUID List: Not Supported 00:09:33.921 Multi-Domain Subsystem: Not Supported 00:09:33.921 Fixed Capacity Management: Not Supported 00:09:33.921 Variable Capacity Management: Not Supported 00:09:33.921 Delete Endurance Group: Not Supported 00:09:33.921 Delete NVM Set: Not Supported 00:09:33.921 Extended LBA Formats Supported: Supported 00:09:33.921 Flexible Data Placement Supported: Not Supported 00:09:33.921 00:09:33.921 Controller Memory Buffer Support 00:09:33.921 ================================ 00:09:33.921 Supported: No 00:09:33.921 00:09:33.921 Persistent Memory Region Support 00:09:33.921 ================================ 00:09:33.921 Supported: No 00:09:33.921 00:09:33.921 Admin Command Set Attributes 00:09:33.921 ============================ 00:09:33.921 Security Send/Receive: Not Supported 00:09:33.921 Format NVM: Supported 00:09:33.921 Firmware Activate/Download: Not Supported 00:09:33.921 Namespace Management: Supported 00:09:33.921 Device Self-Test: Not Supported 00:09:33.921 Directives: Supported 00:09:33.921 NVMe-MI: Not Supported 00:09:33.921 Virtualization Management: Not Supported 00:09:33.921 Doorbell Buffer Config: Supported 00:09:33.921 Get LBA Status Capability: Not Supported 00:09:33.921 Command & Feature Lockdown Capability: Not Supported 00:09:33.921 Abort Command Limit: 4 00:09:33.921 Async Event Request Limit: 4 00:09:33.921 Number of Firmware Slots: N/A 00:09:33.921 Firmware Slot 1 Read-Only: N/A 00:09:33.921 Firmware Activation Without Reset: N/A 00:09:33.921 Multiple Update Detection Support: N/A 00:09:33.921 Firmware Update Granularity: No Information Provided 00:09:33.921 Per-Namespace SMART Log: Yes 00:09:33.921 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.921 Subsystem 
NQN: nqn.2019-08.org.qemu:12340 00:09:33.921 Command Effects Log Page: Supported 00:09:33.921 Get Log Page Extended Data: Supported 00:09:33.922 Telemetry Log Pages: Not Supported 00:09:33.922 Persistent Event Log Pages: Not Supported 00:09:33.922 Supported Log Pages Log Page: May Support 00:09:33.922 Commands Supported & Effects Log Page: Not Supported 00:09:33.922 Feature Identifiers & Effects Log Page:May Support 00:09:33.922 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.922 Data Area 4 for Telemetry Log: Not Supported 00:09:33.922 Error Log Page Entries Supported: 1 00:09:33.922 Keep Alive: Not Supported 00:09:33.922 00:09:33.922 NVM Command Set Attributes 00:09:33.922 ========================== 00:09:33.922 Submission Queue Entry Size 00:09:33.922 Max: 64 00:09:33.922 Min: 64 00:09:33.922 Completion Queue Entry Size 00:09:33.922 Max: 16 00:09:33.922 Min: 16 00:09:33.922 Number of Namespaces: 256 00:09:33.922 Compare Command: Supported 00:09:33.922 Write Uncorrectable Command: Not Supported 00:09:33.922 Dataset Management Command: Supported 00:09:33.922 Write Zeroes Command: Supported 00:09:33.922 Set Features Save Field: Supported 00:09:33.922 Reservations: Not Supported 00:09:33.922 Timestamp: Supported 00:09:33.922 Copy: Supported 00:09:33.922 Volatile Write Cache: Present 00:09:33.922 Atomic Write Unit (Normal): 1 00:09:33.922 Atomic Write Unit (PFail): 1 00:09:33.922 Atomic Compare & Write Unit: 1 00:09:33.922 Fused Compare & Write: Not Supported 00:09:33.922 Scatter-Gather List 00:09:33.922 SGL Command Set: Supported 00:09:33.922 SGL Keyed: Not Supported 00:09:33.922 SGL Bit Bucket Descriptor: Not Supported 00:09:33.922 SGL Metadata Pointer: Not Supported 00:09:33.922 Oversized SGL: Not Supported 00:09:33.922 SGL Metadata Address: Not Supported 00:09:33.922 SGL Offset: Not Supported 00:09:33.922 Transport SGL Data Block: Not Supported 00:09:33.922 Replay Protected Memory Block: Not Supported 00:09:33.922 00:09:33.922 Firmware Slot Information 00:09:33.922 ========================= 00:09:33.922 Active slot: 1 00:09:33.922 Slot 1 Firmware Revision: 1.0 00:09:33.922 00:09:33.922 00:09:33.922 Commands Supported and Effects 00:09:33.922 ============================== 00:09:33.922 Admin Commands 00:09:33.922 -------------- 00:09:33.922 Delete I/O Submission Queue (00h): Supported 00:09:33.922 Create I/O Submission Queue (01h): Supported 00:09:33.922 Get Log Page (02h): Supported 00:09:33.922 Delete I/O Completion Queue (04h): Supported 00:09:33.922 Create I/O Completion Queue (05h): Supported 00:09:33.922 Identify (06h): Supported 00:09:33.922 Abort (08h): Supported 00:09:33.922 Set Features (09h): Supported 00:09:33.922 Get Features (0Ah): Supported 00:09:33.922 Asynchronous Event Request (0Ch): Supported 00:09:33.922 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.922 Directive Send (19h): Supported 00:09:33.922 Directive Receive (1Ah): Supported 00:09:33.922 Virtualization Management (1Ch): Supported 00:09:33.922 Doorbell Buffer Config (7Ch): Supported 00:09:33.922 Format NVM (80h): Supported LBA-Change 00:09:33.922 I/O Commands 00:09:33.922 ------------ 00:09:33.922 Flush (00h): Supported LBA-Change 00:09:33.922 Write (01h): Supported LBA-Change 00:09:33.922 Read (02h): Supported 00:09:33.922 Compare (05h): Supported 00:09:33.922 Write Zeroes (08h): Supported LBA-Change 00:09:33.922 Dataset Management (09h): Supported LBA-Change 00:09:33.922 Unknown (0Ch): Supported 00:09:33.922 Unknown (12h): Supported 00:09:33.922 Copy (19h): Supported LBA-Change 
00:09:33.922 Unknown (1Dh): Supported LBA-Change 00:09:33.922 00:09:33.922 Error Log 00:09:33.922 ========= 00:09:33.922 00:09:33.922 Arbitration 00:09:33.922 =========== 00:09:33.922 Arbitration Burst: no limit 00:09:33.922 00:09:33.922 Power Management 00:09:33.922 ================ 00:09:33.922 Number of Power States: 1 00:09:33.922 Current Power State: Power State #0 00:09:33.922 Power State #0: 00:09:33.922 Max Power: 25.00 W 00:09:33.922 Non-Operational State: Operational 00:09:33.922 Entry Latency: 16 microseconds 00:09:33.922 Exit Latency: 4 microseconds 00:09:33.922 Relative Read Throughput: 0 00:09:33.922 Relative Read Latency: 0 00:09:33.922 Relative Write Throughput: 0 00:09:33.922 Relative Write Latency: 0 00:09:33.922 Idle Power: Not Reported 00:09:33.922 Active Power: Not Reported 00:09:33.922 Non-Operational Permissive Mode: Not Supported 00:09:33.922 00:09:33.922 Health Information 00:09:33.922 ================== 00:09:33.922 Critical Warnings: 00:09:33.922 Available Spare Space: OK 00:09:33.922 Temperature: OK 00:09:33.922 Device Reliability: OK 00:09:33.922 Read Only: No 00:09:33.922 Volatile Memory Backup: OK 00:09:33.922 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.922 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.922 Available Spare: 0% 00:09:33.922 Available Spare Threshold: 0% 00:09:33.922 Life Percentage Used: 0% 00:09:33.922 Data Units Read: 1869 00:09:33.922 Data Units Written: 867 00:09:33.922 Host Read Commands: 91846 00:09:33.922 Host Write Commands: 45719 00:09:33.922 Controller Busy Time: 0 minutes 00:09:33.922 Power Cycles: 0 00:09:33.922 Power On Hours: 0 hours 00:09:33.922 Unsafe Shutdowns: 0 00:09:33.922 Unrecoverable Media Errors: 0 00:09:33.922 Lifetime Error Log Entries: 0 00:09:33.922 Warning Temperature Time: 0 minutes 00:09:33.922 Critical Temperature Time: 0 minutes 00:09:33.922 00:09:33.922 Number of Queues 00:09:33.922 ================ 00:09:33.922 Number of I/O Submission Queues: 64 00:09:33.922 Number of I/O Completion Queues: 64 00:09:33.922 00:09:33.922 ZNS Specific Controller Data 00:09:33.922 ============================ 00:09:33.922 Zone Append Size Limit: 0 00:09:33.922 00:09:33.922 00:09:33.922 Active Namespaces 00:09:33.922 ================= 00:09:33.922 Namespace ID:1 00:09:33.922 Error Recovery Timeout: Unlimited 00:09:33.922 Command Set Identifier: NVM (00h) 00:09:33.922 Deallocate: Supported 00:09:33.922 Deallocated/Unwritten Error: Supported 00:09:33.922 Deallocated Read Value: All 0x00 00:09:33.922 Deallocate in Write Zeroes: Not Supported 00:09:33.922 Deallocated Guard Field: 0xFFFF 00:09:33.922 Flush: Supported 00:09:33.922 Reservation: Not Supported 00:09:33.922 Metadata Transferred as: Separate Metadata Buffer 00:09:33.922 Namespace Sharing Capabilities: Private 00:09:33.922 Size (in LBAs): 1548666 (5GiB) 00:09:33.922 Capacity (in LBAs): 1548666 (5GiB) 00:09:33.922 Utilization (in LBAs): 1548666 (5GiB) 00:09:33.922 Thin Provisioning: Not Supported 00:09:33.922 Per-NS Atomic Units: No 00:09:33.922 Maximum Single Source Range Length: 128 00:09:33.922 Maximum Copy Length: 128 00:09:33.922 Maximum Source Range Count: 128 00:09:33.922 NGUID/EUI64 Never Reused: No 00:09:33.922 Namespace Write Protected: No 00:09:33.922 Number of LBA Formats: 8 00:09:33.922 Current LBA Format: LBA Format #07 00:09:33.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.922 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.922 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.922 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:33.922 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.922 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.922 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.922 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.922 00:09:33.922 20:02:41 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:33.922 20:02:41 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:34.182 ===================================================== 00:09:34.182 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:34.182 ===================================================== 00:09:34.182 Controller Capabilities/Features 00:09:34.182 ================================ 00:09:34.182 Vendor ID: 1b36 00:09:34.182 Subsystem Vendor ID: 1af4 00:09:34.182 Serial Number: 12341 00:09:34.182 Model Number: QEMU NVMe Ctrl 00:09:34.182 Firmware Version: 8.0.0 00:09:34.182 Recommended Arb Burst: 6 00:09:34.182 IEEE OUI Identifier: 00 54 52 00:09:34.182 Multi-path I/O 00:09:34.182 May have multiple subsystem ports: No 00:09:34.182 May have multiple controllers: No 00:09:34.182 Associated with SR-IOV VF: No 00:09:34.182 Max Data Transfer Size: 524288 00:09:34.182 Max Number of Namespaces: 256 00:09:34.182 Max Number of I/O Queues: 64 00:09:34.182 NVMe Specification Version (VS): 1.4 00:09:34.182 NVMe Specification Version (Identify): 1.4 00:09:34.182 Maximum Queue Entries: 2048 00:09:34.182 Contiguous Queues Required: Yes 00:09:34.182 Arbitration Mechanisms Supported 00:09:34.182 Weighted Round Robin: Not Supported 00:09:34.182 Vendor Specific: Not Supported 00:09:34.182 Reset Timeout: 7500 ms 00:09:34.182 Doorbell Stride: 4 bytes 00:09:34.182 NVM Subsystem Reset: Not Supported 00:09:34.182 Command Sets Supported 00:09:34.182 NVM Command Set: Supported 00:09:34.182 Boot Partition: Not Supported 00:09:34.182 Memory Page Size Minimum: 4096 bytes 00:09:34.182 Memory Page Size Maximum: 65536 bytes 00:09:34.182 Persistent Memory Region: Not Supported 00:09:34.182 Optional Asynchronous Events Supported 00:09:34.182 Namespace Attribute Notices: Supported 00:09:34.182 Firmware Activation Notices: Not Supported 00:09:34.182 ANA Change Notices: Not Supported 00:09:34.182 PLE Aggregate Log Change Notices: Not Supported 00:09:34.182 LBA Status Info Alert Notices: Not Supported 00:09:34.182 EGE Aggregate Log Change Notices: Not Supported 00:09:34.182 Normal NVM Subsystem Shutdown event: Not Supported 00:09:34.182 Zone Descriptor Change Notices: Not Supported 00:09:34.182 Discovery Log Change Notices: Not Supported 00:09:34.182 Controller Attributes 00:09:34.182 128-bit Host Identifier: Not Supported 00:09:34.182 Non-Operational Permissive Mode: Not Supported 00:09:34.182 NVM Sets: Not Supported 00:09:34.182 Read Recovery Levels: Not Supported 00:09:34.182 Endurance Groups: Not Supported 00:09:34.182 Predictable Latency Mode: Not Supported 00:09:34.182 Traffic Based Keep ALive: Not Supported 00:09:34.182 Namespace Granularity: Not Supported 00:09:34.182 SQ Associations: Not Supported 00:09:34.182 UUID List: Not Supported 00:09:34.182 Multi-Domain Subsystem: Not Supported 00:09:34.182 Fixed Capacity Management: Not Supported 00:09:34.182 Variable Capacity Management: Not Supported 00:09:34.182 Delete Endurance Group: Not Supported 00:09:34.182 Delete NVM Set: Not Supported 00:09:34.182 Extended LBA Formats Supported: Supported 00:09:34.182 Flexible Data Placement Supported: Not Supported 00:09:34.182 00:09:34.182 
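The `-- nvme/nvme.sh@15/16` lines above show the test looping over the detected PCIe BDFs and running spdk_nvme_identify once per controller, which produces the per-controller reports in this log. A minimal standalone sketch of that loop follows; the binary path, the `-r 'trtype:PCIe traddr:...'` and `-i 0` arguments, and the BDF list are taken from the commands echoed above and are assumptions for any other environment.

#!/usr/bin/env bash
# Sketch only: re-run the per-controller identify step outside the test
# harness. Binary path, arguments, and BDF list mirror the commands
# echoed in this log; adjust them for a different setup.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=("0000:00:06.0" "0000:00:07.0" "0000:00:08.0" "0000:00:09.0")

for bdf in "${bdfs[@]}"; do
    echo "===== spdk_nvme_identify: ${bdf} ====="
    "${SPDK_BIN}/spdk_nvme_identify" -r "trtype:PCIe traddr:${bdf}" -i 0
done

Each iteration prints the same Controller Capabilities/Features, Admin and NVM Command Set Attributes, and Active Namespaces sections seen in the reports above and below.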
Controller Memory Buffer Support 00:09:34.182 ================================ 00:09:34.182 Supported: No 00:09:34.182 00:09:34.182 Persistent Memory Region Support 00:09:34.182 ================================ 00:09:34.182 Supported: No 00:09:34.182 00:09:34.182 Admin Command Set Attributes 00:09:34.182 ============================ 00:09:34.182 Security Send/Receive: Not Supported 00:09:34.182 Format NVM: Supported 00:09:34.182 Firmware Activate/Download: Not Supported 00:09:34.182 Namespace Management: Supported 00:09:34.182 Device Self-Test: Not Supported 00:09:34.182 Directives: Supported 00:09:34.182 NVMe-MI: Not Supported 00:09:34.182 Virtualization Management: Not Supported 00:09:34.182 Doorbell Buffer Config: Supported 00:09:34.182 Get LBA Status Capability: Not Supported 00:09:34.182 Command & Feature Lockdown Capability: Not Supported 00:09:34.182 Abort Command Limit: 4 00:09:34.182 Async Event Request Limit: 4 00:09:34.182 Number of Firmware Slots: N/A 00:09:34.182 Firmware Slot 1 Read-Only: N/A 00:09:34.182 Firmware Activation Without Reset: N/A 00:09:34.182 Multiple Update Detection Support: N/A 00:09:34.182 Firmware Update Granularity: No Information Provided 00:09:34.182 Per-Namespace SMART Log: Yes 00:09:34.182 Asymmetric Namespace Access Log Page: Not Supported 00:09:34.182 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:34.182 Command Effects Log Page: Supported 00:09:34.182 Get Log Page Extended Data: Supported 00:09:34.182 Telemetry Log Pages: Not Supported 00:09:34.182 Persistent Event Log Pages: Not Supported 00:09:34.182 Supported Log Pages Log Page: May Support 00:09:34.182 Commands Supported & Effects Log Page: Not Supported 00:09:34.182 Feature Identifiers & Effects Log Page:May Support 00:09:34.182 NVMe-MI Commands & Effects Log Page: May Support 00:09:34.182 Data Area 4 for Telemetry Log: Not Supported 00:09:34.182 Error Log Page Entries Supported: 1 00:09:34.182 Keep Alive: Not Supported 00:09:34.182 00:09:34.182 NVM Command Set Attributes 00:09:34.182 ========================== 00:09:34.182 Submission Queue Entry Size 00:09:34.182 Max: 64 00:09:34.182 Min: 64 00:09:34.182 Completion Queue Entry Size 00:09:34.182 Max: 16 00:09:34.182 Min: 16 00:09:34.182 Number of Namespaces: 256 00:09:34.182 Compare Command: Supported 00:09:34.182 Write Uncorrectable Command: Not Supported 00:09:34.182 Dataset Management Command: Supported 00:09:34.182 Write Zeroes Command: Supported 00:09:34.182 Set Features Save Field: Supported 00:09:34.182 Reservations: Not Supported 00:09:34.182 Timestamp: Supported 00:09:34.182 Copy: Supported 00:09:34.182 Volatile Write Cache: Present 00:09:34.182 Atomic Write Unit (Normal): 1 00:09:34.182 Atomic Write Unit (PFail): 1 00:09:34.182 Atomic Compare & Write Unit: 1 00:09:34.182 Fused Compare & Write: Not Supported 00:09:34.182 Scatter-Gather List 00:09:34.182 SGL Command Set: Supported 00:09:34.182 SGL Keyed: Not Supported 00:09:34.182 SGL Bit Bucket Descriptor: Not Supported 00:09:34.182 SGL Metadata Pointer: Not Supported 00:09:34.182 Oversized SGL: Not Supported 00:09:34.182 SGL Metadata Address: Not Supported 00:09:34.182 SGL Offset: Not Supported 00:09:34.182 Transport SGL Data Block: Not Supported 00:09:34.182 Replay Protected Memory Block: Not Supported 00:09:34.182 00:09:34.182 Firmware Slot Information 00:09:34.182 ========================= 00:09:34.182 Active slot: 1 00:09:34.182 Slot 1 Firmware Revision: 1.0 00:09:34.182 00:09:34.182 00:09:34.182 Commands Supported and Effects 00:09:34.182 ============================== 
00:09:34.182 Admin Commands 00:09:34.182 -------------- 00:09:34.182 Delete I/O Submission Queue (00h): Supported 00:09:34.182 Create I/O Submission Queue (01h): Supported 00:09:34.182 Get Log Page (02h): Supported 00:09:34.182 Delete I/O Completion Queue (04h): Supported 00:09:34.182 Create I/O Completion Queue (05h): Supported 00:09:34.182 Identify (06h): Supported 00:09:34.182 Abort (08h): Supported 00:09:34.182 Set Features (09h): Supported 00:09:34.182 Get Features (0Ah): Supported 00:09:34.182 Asynchronous Event Request (0Ch): Supported 00:09:34.182 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:34.182 Directive Send (19h): Supported 00:09:34.182 Directive Receive (1Ah): Supported 00:09:34.182 Virtualization Management (1Ch): Supported 00:09:34.182 Doorbell Buffer Config (7Ch): Supported 00:09:34.182 Format NVM (80h): Supported LBA-Change 00:09:34.182 I/O Commands 00:09:34.182 ------------ 00:09:34.182 Flush (00h): Supported LBA-Change 00:09:34.182 Write (01h): Supported LBA-Change 00:09:34.182 Read (02h): Supported 00:09:34.182 Compare (05h): Supported 00:09:34.182 Write Zeroes (08h): Supported LBA-Change 00:09:34.182 Dataset Management (09h): Supported LBA-Change 00:09:34.182 Unknown (0Ch): Supported 00:09:34.182 Unknown (12h): Supported 00:09:34.183 Copy (19h): Supported LBA-Change 00:09:34.183 Unknown (1Dh): Supported LBA-Change 00:09:34.183 00:09:34.183 Error Log 00:09:34.183 ========= 00:09:34.183 00:09:34.183 Arbitration 00:09:34.183 =========== 00:09:34.183 Arbitration Burst: no limit 00:09:34.183 00:09:34.183 Power Management 00:09:34.183 ================ 00:09:34.183 Number of Power States: 1 00:09:34.183 Current Power State: Power State #0 00:09:34.183 Power State #0: 00:09:34.183 Max Power: 25.00 W 00:09:34.183 Non-Operational State: Operational 00:09:34.183 Entry Latency: 16 microseconds 00:09:34.183 Exit Latency: 4 microseconds 00:09:34.183 Relative Read Throughput: 0 00:09:34.183 Relative Read Latency: 0 00:09:34.183 Relative Write Throughput: 0 00:09:34.183 Relative Write Latency: 0 00:09:34.183 Idle Power: Not Reported 00:09:34.183 Active Power: Not Reported 00:09:34.183 Non-Operational Permissive Mode: Not Supported 00:09:34.183 00:09:34.183 Health Information 00:09:34.183 ================== 00:09:34.183 Critical Warnings: 00:09:34.183 Available Spare Space: OK 00:09:34.183 Temperature: OK 00:09:34.183 Device Reliability: OK 00:09:34.183 Read Only: No 00:09:34.183 Volatile Memory Backup: OK 00:09:34.183 Current Temperature: 323 Kelvin (50 Celsius) 00:09:34.183 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:34.183 Available Spare: 0% 00:09:34.183 Available Spare Threshold: 0% 00:09:34.183 Life Percentage Used: 0% 00:09:34.183 Data Units Read: 1266 00:09:34.183 Data Units Written: 589 00:09:34.183 Host Read Commands: 60861 00:09:34.183 Host Write Commands: 29969 00:09:34.183 Controller Busy Time: 0 minutes 00:09:34.183 Power Cycles: 0 00:09:34.183 Power On Hours: 0 hours 00:09:34.183 Unsafe Shutdowns: 0 00:09:34.183 Unrecoverable Media Errors: 0 00:09:34.183 Lifetime Error Log Entries: 0 00:09:34.183 Warning Temperature Time: 0 minutes 00:09:34.183 Critical Temperature Time: 0 minutes 00:09:34.183 00:09:34.183 Number of Queues 00:09:34.183 ================ 00:09:34.183 Number of I/O Submission Queues: 64 00:09:34.183 Number of I/O Completion Queues: 64 00:09:34.183 00:09:34.183 ZNS Specific Controller Data 00:09:34.183 ============================ 00:09:34.183 Zone Append Size Limit: 0 00:09:34.183 00:09:34.183 00:09:34.183 Active Namespaces 
00:09:34.183 ================= 00:09:34.183 Namespace ID:1 00:09:34.183 Error Recovery Timeout: Unlimited 00:09:34.183 Command Set Identifier: NVM (00h) 00:09:34.183 Deallocate: Supported 00:09:34.183 Deallocated/Unwritten Error: Supported 00:09:34.183 Deallocated Read Value: All 0x00 00:09:34.183 Deallocate in Write Zeroes: Not Supported 00:09:34.183 Deallocated Guard Field: 0xFFFF 00:09:34.183 Flush: Supported 00:09:34.183 Reservation: Not Supported 00:09:34.183 Namespace Sharing Capabilities: Private 00:09:34.183 Size (in LBAs): 1310720 (5GiB) 00:09:34.183 Capacity (in LBAs): 1310720 (5GiB) 00:09:34.183 Utilization (in LBAs): 1310720 (5GiB) 00:09:34.183 Thin Provisioning: Not Supported 00:09:34.183 Per-NS Atomic Units: No 00:09:34.183 Maximum Single Source Range Length: 128 00:09:34.183 Maximum Copy Length: 128 00:09:34.183 Maximum Source Range Count: 128 00:09:34.183 NGUID/EUI64 Never Reused: No 00:09:34.183 Namespace Write Protected: No 00:09:34.183 Number of LBA Formats: 8 00:09:34.183 Current LBA Format: LBA Format #04 00:09:34.183 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:34.183 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:34.183 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:34.183 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:34.183 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:34.183 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:34.183 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:34.183 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:34.183 00:09:34.183 20:02:41 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:34.183 20:02:41 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:34.442 ===================================================== 00:09:34.442 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:34.442 ===================================================== 00:09:34.442 Controller Capabilities/Features 00:09:34.442 ================================ 00:09:34.442 Vendor ID: 1b36 00:09:34.442 Subsystem Vendor ID: 1af4 00:09:34.442 Serial Number: 12342 00:09:34.442 Model Number: QEMU NVMe Ctrl 00:09:34.442 Firmware Version: 8.0.0 00:09:34.442 Recommended Arb Burst: 6 00:09:34.442 IEEE OUI Identifier: 00 54 52 00:09:34.442 Multi-path I/O 00:09:34.442 May have multiple subsystem ports: No 00:09:34.442 May have multiple controllers: No 00:09:34.442 Associated with SR-IOV VF: No 00:09:34.442 Max Data Transfer Size: 524288 00:09:34.442 Max Number of Namespaces: 256 00:09:34.442 Max Number of I/O Queues: 64 00:09:34.442 NVMe Specification Version (VS): 1.4 00:09:34.442 NVMe Specification Version (Identify): 1.4 00:09:34.442 Maximum Queue Entries: 2048 00:09:34.442 Contiguous Queues Required: Yes 00:09:34.442 Arbitration Mechanisms Supported 00:09:34.442 Weighted Round Robin: Not Supported 00:09:34.442 Vendor Specific: Not Supported 00:09:34.442 Reset Timeout: 7500 ms 00:09:34.442 Doorbell Stride: 4 bytes 00:09:34.442 NVM Subsystem Reset: Not Supported 00:09:34.442 Command Sets Supported 00:09:34.442 NVM Command Set: Supported 00:09:34.442 Boot Partition: Not Supported 00:09:34.442 Memory Page Size Minimum: 4096 bytes 00:09:34.442 Memory Page Size Maximum: 65536 bytes 00:09:34.442 Persistent Memory Region: Not Supported 00:09:34.442 Optional Asynchronous Events Supported 00:09:34.442 Namespace Attribute Notices: Supported 00:09:34.442 Firmware Activation Notices: Not Supported 00:09:34.442 ANA Change Notices: Not Supported 
00:09:34.442 PLE Aggregate Log Change Notices: Not Supported 00:09:34.442 LBA Status Info Alert Notices: Not Supported 00:09:34.442 EGE Aggregate Log Change Notices: Not Supported 00:09:34.442 Normal NVM Subsystem Shutdown event: Not Supported 00:09:34.442 Zone Descriptor Change Notices: Not Supported 00:09:34.442 Discovery Log Change Notices: Not Supported 00:09:34.442 Controller Attributes 00:09:34.442 128-bit Host Identifier: Not Supported 00:09:34.442 Non-Operational Permissive Mode: Not Supported 00:09:34.442 NVM Sets: Not Supported 00:09:34.442 Read Recovery Levels: Not Supported 00:09:34.442 Endurance Groups: Not Supported 00:09:34.442 Predictable Latency Mode: Not Supported 00:09:34.442 Traffic Based Keep ALive: Not Supported 00:09:34.442 Namespace Granularity: Not Supported 00:09:34.442 SQ Associations: Not Supported 00:09:34.442 UUID List: Not Supported 00:09:34.442 Multi-Domain Subsystem: Not Supported 00:09:34.442 Fixed Capacity Management: Not Supported 00:09:34.442 Variable Capacity Management: Not Supported 00:09:34.442 Delete Endurance Group: Not Supported 00:09:34.443 Delete NVM Set: Not Supported 00:09:34.443 Extended LBA Formats Supported: Supported 00:09:34.443 Flexible Data Placement Supported: Not Supported 00:09:34.443 00:09:34.443 Controller Memory Buffer Support 00:09:34.443 ================================ 00:09:34.443 Supported: No 00:09:34.443 00:09:34.443 Persistent Memory Region Support 00:09:34.443 ================================ 00:09:34.443 Supported: No 00:09:34.443 00:09:34.443 Admin Command Set Attributes 00:09:34.443 ============================ 00:09:34.443 Security Send/Receive: Not Supported 00:09:34.443 Format NVM: Supported 00:09:34.443 Firmware Activate/Download: Not Supported 00:09:34.443 Namespace Management: Supported 00:09:34.443 Device Self-Test: Not Supported 00:09:34.443 Directives: Supported 00:09:34.443 NVMe-MI: Not Supported 00:09:34.443 Virtualization Management: Not Supported 00:09:34.443 Doorbell Buffer Config: Supported 00:09:34.443 Get LBA Status Capability: Not Supported 00:09:34.443 Command & Feature Lockdown Capability: Not Supported 00:09:34.443 Abort Command Limit: 4 00:09:34.443 Async Event Request Limit: 4 00:09:34.443 Number of Firmware Slots: N/A 00:09:34.443 Firmware Slot 1 Read-Only: N/A 00:09:34.443 Firmware Activation Without Reset: N/A 00:09:34.443 Multiple Update Detection Support: N/A 00:09:34.443 Firmware Update Granularity: No Information Provided 00:09:34.443 Per-Namespace SMART Log: Yes 00:09:34.443 Asymmetric Namespace Access Log Page: Not Supported 00:09:34.443 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:34.443 Command Effects Log Page: Supported 00:09:34.443 Get Log Page Extended Data: Supported 00:09:34.443 Telemetry Log Pages: Not Supported 00:09:34.443 Persistent Event Log Pages: Not Supported 00:09:34.443 Supported Log Pages Log Page: May Support 00:09:34.443 Commands Supported & Effects Log Page: Not Supported 00:09:34.443 Feature Identifiers & Effects Log Page:May Support 00:09:34.443 NVMe-MI Commands & Effects Log Page: May Support 00:09:34.443 Data Area 4 for Telemetry Log: Not Supported 00:09:34.443 Error Log Page Entries Supported: 1 00:09:34.443 Keep Alive: Not Supported 00:09:34.443 00:09:34.443 NVM Command Set Attributes 00:09:34.443 ========================== 00:09:34.443 Submission Queue Entry Size 00:09:34.443 Max: 64 00:09:34.443 Min: 64 00:09:34.443 Completion Queue Entry Size 00:09:34.443 Max: 16 00:09:34.443 Min: 16 00:09:34.443 Number of Namespaces: 256 00:09:34.443 Compare Command: 
Supported 00:09:34.443 Write Uncorrectable Command: Not Supported 00:09:34.443 Dataset Management Command: Supported 00:09:34.443 Write Zeroes Command: Supported 00:09:34.443 Set Features Save Field: Supported 00:09:34.443 Reservations: Not Supported 00:09:34.443 Timestamp: Supported 00:09:34.443 Copy: Supported 00:09:34.443 Volatile Write Cache: Present 00:09:34.443 Atomic Write Unit (Normal): 1 00:09:34.443 Atomic Write Unit (PFail): 1 00:09:34.443 Atomic Compare & Write Unit: 1 00:09:34.443 Fused Compare & Write: Not Supported 00:09:34.443 Scatter-Gather List 00:09:34.443 SGL Command Set: Supported 00:09:34.443 SGL Keyed: Not Supported 00:09:34.443 SGL Bit Bucket Descriptor: Not Supported 00:09:34.443 SGL Metadata Pointer: Not Supported 00:09:34.443 Oversized SGL: Not Supported 00:09:34.443 SGL Metadata Address: Not Supported 00:09:34.443 SGL Offset: Not Supported 00:09:34.443 Transport SGL Data Block: Not Supported 00:09:34.443 Replay Protected Memory Block: Not Supported 00:09:34.443 00:09:34.443 Firmware Slot Information 00:09:34.443 ========================= 00:09:34.443 Active slot: 1 00:09:34.443 Slot 1 Firmware Revision: 1.0 00:09:34.443 00:09:34.443 00:09:34.443 Commands Supported and Effects 00:09:34.443 ============================== 00:09:34.443 Admin Commands 00:09:34.443 -------------- 00:09:34.443 Delete I/O Submission Queue (00h): Supported 00:09:34.443 Create I/O Submission Queue (01h): Supported 00:09:34.443 Get Log Page (02h): Supported 00:09:34.443 Delete I/O Completion Queue (04h): Supported 00:09:34.443 Create I/O Completion Queue (05h): Supported 00:09:34.443 Identify (06h): Supported 00:09:34.443 Abort (08h): Supported 00:09:34.443 Set Features (09h): Supported 00:09:34.443 Get Features (0Ah): Supported 00:09:34.443 Asynchronous Event Request (0Ch): Supported 00:09:34.443 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:34.443 Directive Send (19h): Supported 00:09:34.443 Directive Receive (1Ah): Supported 00:09:34.443 Virtualization Management (1Ch): Supported 00:09:34.443 Doorbell Buffer Config (7Ch): Supported 00:09:34.443 Format NVM (80h): Supported LBA-Change 00:09:34.443 I/O Commands 00:09:34.443 ------------ 00:09:34.443 Flush (00h): Supported LBA-Change 00:09:34.443 Write (01h): Supported LBA-Change 00:09:34.443 Read (02h): Supported 00:09:34.443 Compare (05h): Supported 00:09:34.443 Write Zeroes (08h): Supported LBA-Change 00:09:34.443 Dataset Management (09h): Supported LBA-Change 00:09:34.443 Unknown (0Ch): Supported 00:09:34.443 Unknown (12h): Supported 00:09:34.443 Copy (19h): Supported LBA-Change 00:09:34.443 Unknown (1Dh): Supported LBA-Change 00:09:34.443 00:09:34.443 Error Log 00:09:34.443 ========= 00:09:34.443 00:09:34.443 Arbitration 00:09:34.443 =========== 00:09:34.443 Arbitration Burst: no limit 00:09:34.443 00:09:34.443 Power Management 00:09:34.443 ================ 00:09:34.443 Number of Power States: 1 00:09:34.443 Current Power State: Power State #0 00:09:34.443 Power State #0: 00:09:34.443 Max Power: 25.00 W 00:09:34.443 Non-Operational State: Operational 00:09:34.443 Entry Latency: 16 microseconds 00:09:34.443 Exit Latency: 4 microseconds 00:09:34.443 Relative Read Throughput: 0 00:09:34.443 Relative Read Latency: 0 00:09:34.443 Relative Write Throughput: 0 00:09:34.443 Relative Write Latency: 0 00:09:34.443 Idle Power: Not Reported 00:09:34.443 Active Power: Not Reported 00:09:34.443 Non-Operational Permissive Mode: Not Supported 00:09:34.443 00:09:34.443 Health Information 00:09:34.443 ================== 00:09:34.443 
Critical Warnings: 00:09:34.443 Available Spare Space: OK 00:09:34.443 Temperature: OK 00:09:34.443 Device Reliability: OK 00:09:34.443 Read Only: No 00:09:34.443 Volatile Memory Backup: OK 00:09:34.443 Current Temperature: 323 Kelvin (50 Celsius) 00:09:34.443 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:34.443 Available Spare: 0% 00:09:34.443 Available Spare Threshold: 0% 00:09:34.443 Life Percentage Used: 0% 00:09:34.443 Data Units Read: 3924 00:09:34.443 Data Units Written: 1822 00:09:34.443 Host Read Commands: 184185 00:09:34.443 Host Write Commands: 90533 00:09:34.443 Controller Busy Time: 0 minutes 00:09:34.443 Power Cycles: 0 00:09:34.443 Power On Hours: 0 hours 00:09:34.443 Unsafe Shutdowns: 0 00:09:34.443 Unrecoverable Media Errors: 0 00:09:34.443 Lifetime Error Log Entries: 0 00:09:34.443 Warning Temperature Time: 0 minutes 00:09:34.443 Critical Temperature Time: 0 minutes 00:09:34.443 00:09:34.443 Number of Queues 00:09:34.443 ================ 00:09:34.443 Number of I/O Submission Queues: 64 00:09:34.443 Number of I/O Completion Queues: 64 00:09:34.443 00:09:34.443 ZNS Specific Controller Data 00:09:34.443 ============================ 00:09:34.443 Zone Append Size Limit: 0 00:09:34.443 00:09:34.443 00:09:34.443 Active Namespaces 00:09:34.443 ================= 00:09:34.443 Namespace ID:1 00:09:34.443 Error Recovery Timeout: Unlimited 00:09:34.443 Command Set Identifier: NVM (00h) 00:09:34.443 Deallocate: Supported 00:09:34.443 Deallocated/Unwritten Error: Supported 00:09:34.443 Deallocated Read Value: All 0x00 00:09:34.443 Deallocate in Write Zeroes: Not Supported 00:09:34.443 Deallocated Guard Field: 0xFFFF 00:09:34.443 Flush: Supported 00:09:34.443 Reservation: Not Supported 00:09:34.443 Namespace Sharing Capabilities: Private 00:09:34.443 Size (in LBAs): 1048576 (4GiB) 00:09:34.443 Capacity (in LBAs): 1048576 (4GiB) 00:09:34.443 Utilization (in LBAs): 1048576 (4GiB) 00:09:34.443 Thin Provisioning: Not Supported 00:09:34.443 Per-NS Atomic Units: No 00:09:34.443 Maximum Single Source Range Length: 128 00:09:34.443 Maximum Copy Length: 128 00:09:34.443 Maximum Source Range Count: 128 00:09:34.443 NGUID/EUI64 Never Reused: No 00:09:34.443 Namespace Write Protected: No 00:09:34.443 Number of LBA Formats: 8 00:09:34.443 Current LBA Format: LBA Format #04 00:09:34.443 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:34.443 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:34.443 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:34.443 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:34.443 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:34.443 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:34.443 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:34.443 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:34.443 00:09:34.443 Namespace ID:2 00:09:34.443 Error Recovery Timeout: Unlimited 00:09:34.443 Command Set Identifier: NVM (00h) 00:09:34.443 Deallocate: Supported 00:09:34.443 Deallocated/Unwritten Error: Supported 00:09:34.444 Deallocated Read Value: All 0x00 00:09:34.444 Deallocate in Write Zeroes: Not Supported 00:09:34.444 Deallocated Guard Field: 0xFFFF 00:09:34.444 Flush: Supported 00:09:34.444 Reservation: Not Supported 00:09:34.444 Namespace Sharing Capabilities: Private 00:09:34.444 Size (in LBAs): 1048576 (4GiB) 00:09:34.444 Capacity (in LBAs): 1048576 (4GiB) 00:09:34.444 Utilization (in LBAs): 1048576 (4GiB) 00:09:34.444 Thin Provisioning: Not Supported 00:09:34.444 Per-NS Atomic Units: No 00:09:34.444 Maximum Single 
Source Range Length: 128 00:09:34.444 Maximum Copy Length: 128 00:09:34.444 Maximum Source Range Count: 128 00:09:34.444 NGUID/EUI64 Never Reused: No 00:09:34.444 Namespace Write Protected: No 00:09:34.444 Number of LBA Formats: 8 00:09:34.444 Current LBA Format: LBA Format #04 00:09:34.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:34.444 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:34.444 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:34.444 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:34.444 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:34.444 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:34.444 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:34.444 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:34.444 00:09:34.444 Namespace ID:3 00:09:34.444 Error Recovery Timeout: Unlimited 00:09:34.444 Command Set Identifier: NVM (00h) 00:09:34.444 Deallocate: Supported 00:09:34.444 Deallocated/Unwritten Error: Supported 00:09:34.444 Deallocated Read Value: All 0x00 00:09:34.444 Deallocate in Write Zeroes: Not Supported 00:09:34.444 Deallocated Guard Field: 0xFFFF 00:09:34.444 Flush: Supported 00:09:34.444 Reservation: Not Supported 00:09:34.444 Namespace Sharing Capabilities: Private 00:09:34.444 Size (in LBAs): 1048576 (4GiB) 00:09:34.444 Capacity (in LBAs): 1048576 (4GiB) 00:09:34.444 Utilization (in LBAs): 1048576 (4GiB) 00:09:34.444 Thin Provisioning: Not Supported 00:09:34.444 Per-NS Atomic Units: No 00:09:34.444 Maximum Single Source Range Length: 128 00:09:34.444 Maximum Copy Length: 128 00:09:34.444 Maximum Source Range Count: 128 00:09:34.444 NGUID/EUI64 Never Reused: No 00:09:34.444 Namespace Write Protected: No 00:09:34.444 Number of LBA Formats: 8 00:09:34.444 Current LBA Format: LBA Format #04 00:09:34.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:34.444 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:34.444 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:34.444 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:34.444 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:34.444 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:34.444 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:34.444 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:34.444 00:09:34.444 20:02:41 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:34.444 20:02:41 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:34.703 ===================================================== 00:09:34.703 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:34.703 ===================================================== 00:09:34.703 Controller Capabilities/Features 00:09:34.703 ================================ 00:09:34.703 Vendor ID: 1b36 00:09:34.703 Subsystem Vendor ID: 1af4 00:09:34.703 Serial Number: 12343 00:09:34.703 Model Number: QEMU NVMe Ctrl 00:09:34.703 Firmware Version: 8.0.0 00:09:34.703 Recommended Arb Burst: 6 00:09:34.703 IEEE OUI Identifier: 00 54 52 00:09:34.703 Multi-path I/O 00:09:34.703 May have multiple subsystem ports: No 00:09:34.703 May have multiple controllers: Yes 00:09:34.703 Associated with SR-IOV VF: No 00:09:34.703 Max Data Transfer Size: 524288 00:09:34.703 Max Number of Namespaces: 256 00:09:34.703 Max Number of I/O Queues: 64 00:09:34.703 NVMe Specification Version (VS): 1.4 00:09:34.703 NVMe Specification Version (Identify): 1.4 00:09:34.703 Maximum Queue Entries: 2048 00:09:34.703 Contiguous Queues 
Required: Yes 00:09:34.703 Arbitration Mechanisms Supported 00:09:34.703 Weighted Round Robin: Not Supported 00:09:34.703 Vendor Specific: Not Supported 00:09:34.703 Reset Timeout: 7500 ms 00:09:34.703 Doorbell Stride: 4 bytes 00:09:34.703 NVM Subsystem Reset: Not Supported 00:09:34.703 Command Sets Supported 00:09:34.703 NVM Command Set: Supported 00:09:34.703 Boot Partition: Not Supported 00:09:34.703 Memory Page Size Minimum: 4096 bytes 00:09:34.703 Memory Page Size Maximum: 65536 bytes 00:09:34.703 Persistent Memory Region: Not Supported 00:09:34.703 Optional Asynchronous Events Supported 00:09:34.703 Namespace Attribute Notices: Supported 00:09:34.703 Firmware Activation Notices: Not Supported 00:09:34.703 ANA Change Notices: Not Supported 00:09:34.703 PLE Aggregate Log Change Notices: Not Supported 00:09:34.703 LBA Status Info Alert Notices: Not Supported 00:09:34.703 EGE Aggregate Log Change Notices: Not Supported 00:09:34.703 Normal NVM Subsystem Shutdown event: Not Supported 00:09:34.703 Zone Descriptor Change Notices: Not Supported 00:09:34.703 Discovery Log Change Notices: Not Supported 00:09:34.703 Controller Attributes 00:09:34.703 128-bit Host Identifier: Not Supported 00:09:34.703 Non-Operational Permissive Mode: Not Supported 00:09:34.703 NVM Sets: Not Supported 00:09:34.703 Read Recovery Levels: Not Supported 00:09:34.703 Endurance Groups: Supported 00:09:34.703 Predictable Latency Mode: Not Supported 00:09:34.703 Traffic Based Keep Alive: Not Supported 00:09:34.703 Namespace Granularity: Not Supported 00:09:34.703 SQ Associations: Not Supported 00:09:34.703 UUID List: Not Supported 00:09:34.703 Multi-Domain Subsystem: Not Supported 00:09:34.703 Fixed Capacity Management: Not Supported 00:09:34.703 Variable Capacity Management: Not Supported 00:09:34.703 Delete Endurance Group: Not Supported 00:09:34.703 Delete NVM Set: Not Supported 00:09:34.703 Extended LBA Formats Supported: Supported 00:09:34.703 Flexible Data Placement Supported: Supported 00:09:34.703 00:09:34.703 Controller Memory Buffer Support 00:09:34.703 ================================ 00:09:34.703 Supported: No 00:09:34.703 00:09:34.703 Persistent Memory Region Support 00:09:34.703 ================================ 00:09:34.703 Supported: No 00:09:34.703 00:09:34.703 Admin Command Set Attributes 00:09:34.703 ============================ 00:09:34.703 Security Send/Receive: Not Supported 00:09:34.703 Format NVM: Supported 00:09:34.703 Firmware Activate/Download: Not Supported 00:09:34.703 Namespace Management: Supported 00:09:34.703 Device Self-Test: Not Supported 00:09:34.703 Directives: Supported 00:09:34.704 NVMe-MI: Not Supported 00:09:34.704 Virtualization Management: Not Supported 00:09:34.704 Doorbell Buffer Config: Supported 00:09:34.704 Get LBA Status Capability: Not Supported 00:09:34.704 Command & Feature Lockdown Capability: Not Supported 00:09:34.704 Abort Command Limit: 4 00:09:34.704 Async Event Request Limit: 4 00:09:34.704 Number of Firmware Slots: N/A 00:09:34.704 Firmware Slot 1 Read-Only: N/A 00:09:34.704 Firmware Activation Without Reset: N/A 00:09:34.704 Multiple Update Detection Support: N/A 00:09:34.704 Firmware Update Granularity: No Information Provided 00:09:34.704 Per-Namespace SMART Log: Yes 00:09:34.704 Asymmetric Namespace Access Log Page: Not Supported 00:09:34.704 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:34.704 Command Effects Log Page: Supported 00:09:34.704 Get Log Page Extended Data: Supported 00:09:34.704 Telemetry Log Pages: Not Supported 00:09:34.704 Persistent
Event Log Pages: Not Supported 00:09:34.704 Supported Log Pages Log Page: May Support 00:09:34.704 Commands Supported & Effects Log Page: Not Supported 00:09:34.704 Feature Identifiers & Effects Log Page: May Support 00:09:34.704 NVMe-MI Commands & Effects Log Page: May Support 00:09:34.704 Data Area 4 for Telemetry Log: Not Supported 00:09:34.704 Error Log Page Entries Supported: 1 00:09:34.704 Keep Alive: Not Supported 00:09:34.704 00:09:34.704 NVM Command Set Attributes 00:09:34.704 ========================== 00:09:34.704 Submission Queue Entry Size 00:09:34.704 Max: 64 00:09:34.704 Min: 64 00:09:34.704 Completion Queue Entry Size 00:09:34.704 Max: 16 00:09:34.704 Min: 16 00:09:34.704 Number of Namespaces: 256 00:09:34.704 Compare Command: Supported 00:09:34.704 Write Uncorrectable Command: Not Supported 00:09:34.704 Dataset Management Command: Supported 00:09:34.704 Write Zeroes Command: Supported 00:09:34.704 Set Features Save Field: Supported 00:09:34.704 Reservations: Not Supported 00:09:34.704 Timestamp: Supported 00:09:34.704 Copy: Supported 00:09:34.704 Volatile Write Cache: Present 00:09:34.704 Atomic Write Unit (Normal): 1 00:09:34.704 Atomic Write Unit (PFail): 1 00:09:34.704 Atomic Compare & Write Unit: 1 00:09:34.704 Fused Compare & Write: Not Supported 00:09:34.704 Scatter-Gather List 00:09:34.704 SGL Command Set: Supported 00:09:34.704 SGL Keyed: Not Supported 00:09:34.704 SGL Bit Bucket Descriptor: Not Supported 00:09:34.704 SGL Metadata Pointer: Not Supported 00:09:34.704 Oversized SGL: Not Supported 00:09:34.704 SGL Metadata Address: Not Supported 00:09:34.704 SGL Offset: Not Supported 00:09:34.704 Transport SGL Data Block: Not Supported 00:09:34.704 Replay Protected Memory Block: Not Supported 00:09:34.704 00:09:34.704 Firmware Slot Information 00:09:34.704 ========================= 00:09:34.704 Active slot: 1 00:09:34.704 Slot 1 Firmware Revision: 1.0 00:09:34.704 00:09:34.704 00:09:34.704 Commands Supported and Effects 00:09:34.704 ============================== 00:09:34.704 Admin Commands 00:09:34.704 -------------- 00:09:34.704 Delete I/O Submission Queue (00h): Supported 00:09:34.704 Create I/O Submission Queue (01h): Supported 00:09:34.704 Get Log Page (02h): Supported 00:09:34.704 Delete I/O Completion Queue (04h): Supported 00:09:34.704 Create I/O Completion Queue (05h): Supported 00:09:34.704 Identify (06h): Supported 00:09:34.704 Abort (08h): Supported 00:09:34.704 Set Features (09h): Supported 00:09:34.704 Get Features (0Ah): Supported 00:09:34.704 Asynchronous Event Request (0Ch): Supported 00:09:34.704 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:34.704 Directive Send (19h): Supported 00:09:34.704 Directive Receive (1Ah): Supported 00:09:34.704 Virtualization Management (1Ch): Supported 00:09:34.704 Doorbell Buffer Config (7Ch): Supported 00:09:34.704 Format NVM (80h): Supported LBA-Change 00:09:34.704 I/O Commands 00:09:34.704 ------------ 00:09:34.704 Flush (00h): Supported LBA-Change 00:09:34.704 Write (01h): Supported LBA-Change 00:09:34.704 Read (02h): Supported 00:09:34.704 Compare (05h): Supported 00:09:34.704 Write Zeroes (08h): Supported LBA-Change 00:09:34.704 Dataset Management (09h): Supported LBA-Change 00:09:34.704 Unknown (0Ch): Supported 00:09:34.704 Unknown (12h): Supported 00:09:34.704 Copy (19h): Supported LBA-Change 00:09:34.704 Unknown (1Dh): Supported LBA-Change 00:09:34.704 00:09:34.704 Error Log 00:09:34.704 ========= 00:09:34.704 00:09:34.704 Arbitration 00:09:34.704 =========== 00:09:34.704 Arbitration Burst: no
limit 00:09:34.704 00:09:34.704 Power Management 00:09:34.704 ================ 00:09:34.704 Number of Power States: 1 00:09:34.704 Current Power State: Power State #0 00:09:34.704 Power State #0: 00:09:34.704 Max Power: 25.00 W 00:09:34.704 Non-Operational State: Operational 00:09:34.704 Entry Latency: 16 microseconds 00:09:34.704 Exit Latency: 4 microseconds 00:09:34.704 Relative Read Throughput: 0 00:09:34.704 Relative Read Latency: 0 00:09:34.704 Relative Write Throughput: 0 00:09:34.704 Relative Write Latency: 0 00:09:34.704 Idle Power: Not Reported 00:09:34.704 Active Power: Not Reported 00:09:34.704 Non-Operational Permissive Mode: Not Supported 00:09:34.704 00:09:34.704 Health Information 00:09:34.704 ================== 00:09:34.704 Critical Warnings: 00:09:34.704 Available Spare Space: OK 00:09:34.704 Temperature: OK 00:09:34.704 Device Reliability: OK 00:09:34.704 Read Only: No 00:09:34.704 Volatile Memory Backup: OK 00:09:34.704 Current Temperature: 323 Kelvin (50 Celsius) 00:09:34.704 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:34.704 Available Spare: 0% 00:09:34.704 Available Spare Threshold: 0% 00:09:34.704 Life Percentage Used: 0% 00:09:34.704 Data Units Read: 1428 00:09:34.704 Data Units Written: 666 00:09:34.704 Host Read Commands: 62334 00:09:34.704 Host Write Commands: 30680 00:09:34.704 Controller Busy Time: 0 minutes 00:09:34.704 Power Cycles: 0 00:09:34.704 Power On Hours: 0 hours 00:09:34.704 Unsafe Shutdowns: 0 00:09:34.704 Unrecoverable Media Errors: 0 00:09:34.704 Lifetime Error Log Entries: 0 00:09:34.704 Warning Temperature Time: 0 minutes 00:09:34.704 Critical Temperature Time: 0 minutes 00:09:34.704 00:09:34.704 Number of Queues 00:09:34.704 ================ 00:09:34.704 Number of I/O Submission Queues: 64 00:09:34.704 Number of I/O Completion Queues: 64 00:09:34.704 00:09:34.704 ZNS Specific Controller Data 00:09:34.704 ============================ 00:09:34.704 Zone Append Size Limit: 0 00:09:34.704 00:09:34.704 00:09:34.704 Active Namespaces 00:09:34.704 ================= 00:09:34.704 Namespace ID:1 00:09:34.704 Error Recovery Timeout: Unlimited 00:09:34.704 Command Set Identifier: NVM (00h) 00:09:34.704 Deallocate: Supported 00:09:34.704 Deallocated/Unwritten Error: Supported 00:09:34.704 Deallocated Read Value: All 0x00 00:09:34.704 Deallocate in Write Zeroes: Not Supported 00:09:34.704 Deallocated Guard Field: 0xFFFF 00:09:34.704 Flush: Supported 00:09:34.704 Reservation: Not Supported 00:09:34.704 Namespace Sharing Capabilities: Multiple Controllers 00:09:34.704 Size (in LBAs): 262144 (1GiB) 00:09:34.704 Capacity (in LBAs): 262144 (1GiB) 00:09:34.704 Utilization (in LBAs): 262144 (1GiB) 00:09:34.704 Thin Provisioning: Not Supported 00:09:34.704 Per-NS Atomic Units: No 00:09:34.704 Maximum Single Source Range Length: 128 00:09:34.704 Maximum Copy Length: 128 00:09:34.704 Maximum Source Range Count: 128 00:09:34.704 NGUID/EUI64 Never Reused: No 00:09:34.704 Namespace Write Protected: No 00:09:34.704 Endurance group ID: 1 00:09:34.704 Number of LBA Formats: 8 00:09:34.704 Current LBA Format: LBA Format #04 00:09:34.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:34.704 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:34.704 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:34.704 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:34.704 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:34.704 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:34.704 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:34.704 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:09:34.704 00:09:34.704 Get Feature FDP: 00:09:34.704 ================ 00:09:34.704 Enabled: Yes 00:09:34.704 FDP configuration index: 0 00:09:34.704 00:09:34.704 FDP configurations log page 00:09:34.704 =========================== 00:09:34.704 Number of FDP configurations: 1 00:09:34.704 Version: 0 00:09:34.704 Size: 112 00:09:34.704 FDP Configuration Descriptor: 0 00:09:34.704 Descriptor Size: 96 00:09:34.704 Reclaim Group Identifier format: 2 00:09:34.704 FDP Volatile Write Cache: Not Present 00:09:34.704 FDP Configuration: Valid 00:09:34.704 Vendor Specific Size: 0 00:09:34.704 Number of Reclaim Groups: 2 00:09:34.704 Number of Reclaim Unit Handles: 8 00:09:34.704 Max Placement Identifiers: 128 00:09:34.704 Number of Namespaces Supported: 256 00:09:34.704 Reclaim Unit Nominal Size: 6000000 bytes 00:09:34.704 Estimated Reclaim Unit Time Limit: Not Reported 00:09:34.704 RUH Desc #000: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #001: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #002: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #003: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #004: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #005: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #006: RUH Type: Initially Isolated 00:09:34.705 RUH Desc #007: RUH Type: Initially Isolated 00:09:34.705 00:09:34.705 FDP reclaim unit handle usage log page 00:09:34.705 ====================================== 00:09:34.705 Number of Reclaim Unit Handles: 8 00:09:34.705 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:34.705 RUH Usage Desc #001: RUH Attributes: Unused 00:09:34.705 RUH Usage Desc #002: RUH Attributes: Unused 00:09:34.705 RUH Usage Desc #003: RUH Attributes: Unused 00:09:34.705 RUH Usage Desc #004: RUH Attributes: Unused 00:09:34.705 RUH Usage Desc #005: RUH Attributes: Unused 00:09:34.705 RUH Usage Desc #006: RUH Attributes: Unused 00:09:34.705 RUH Usage Desc #007: RUH Attributes: Unused 00:09:34.705 00:09:34.705 FDP statistics log page 00:09:34.705 ======================= 00:09:34.705 Host bytes with metadata written: 434274304 00:09:34.705 Media bytes with metadata written: 434364416 00:09:34.705 Media bytes erased: 0 00:09:34.705 00:09:34.705 FDP events log page 00:09:34.705 =================== 00:09:34.705 Number of FDP events: 0 00:09:34.705 00:09:34.705 00:09:34.705 real 0m1.085s 00:09:34.705 user 0m0.378s 00:09:34.705 sys 0m0.508s 00:09:34.705 20:02:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.705 20:02:42 -- common/autotest_common.sh@10 -- # set +x 00:09:34.705 ************************************ 00:09:34.705 END TEST nvme_identify 00:09:34.705 ************************************ 00:09:34.705 20:02:42 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:34.705 20:02:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:34.705 20:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.705 20:02:42 -- common/autotest_common.sh@10 -- # set +x 00:09:34.705 ************************************ 00:09:34.705 START TEST nvme_perf 00:09:34.705 ************************************ 00:09:34.705 20:02:42 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:34.705 20:02:42 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:36.084 Initializing NVMe Controllers 00:09:36.084 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:36.084 Attached to NVMe Controller at
0000:00:06.0 [1b36:0010] 00:09:36.084 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:36.084 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:36.084 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:36.084 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:36.084 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:36.084 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:36.084 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:36.084 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:36.084 Initialization complete. Launching workers. 00:09:36.084 ======================================================== 00:09:36.084 Latency(us) 00:09:36.084 Device Information : IOPS MiB/s Average min max 00:09:36.084 PCIE (0000:00:09.0) NSID 1 from core 0: 19168.52 224.63 6676.20 5138.24 29273.98 00:09:36.084 PCIE (0000:00:06.0) NSID 1 from core 0: 19168.52 224.63 6670.06 4959.73 28394.59 00:09:36.084 PCIE (0000:00:07.0) NSID 1 from core 0: 19168.52 224.63 6665.32 5090.92 26974.87 00:09:36.084 PCIE (0000:00:08.0) NSID 1 from core 0: 19168.52 224.63 6659.82 5165.74 26484.01 00:09:36.084 PCIE (0000:00:08.0) NSID 2 from core 0: 19168.52 224.63 6654.41 5186.92 25190.86 00:09:36.084 PCIE (0000:00:08.0) NSID 3 from core 0: 19295.46 226.12 6605.53 5136.74 17076.66 00:09:36.084 ======================================================== 00:09:36.084 Total : 115138.05 1349.27 6655.17 4959.73 29273.98 00:09:36.084 00:09:36.084 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:36.084 ================================================================================= 00:09:36.084 1.00000% : 5318.498us 00:09:36.084 10.00000% : 5570.560us 00:09:36.084 25.00000% : 5873.034us 00:09:36.084 50.00000% : 6377.157us 00:09:36.084 75.00000% : 6856.074us 00:09:36.084 90.00000% : 7360.197us 00:09:36.084 95.00000% : 9275.865us 00:09:36.084 98.00000% : 11241.945us 00:09:36.084 99.00000% : 12250.191us 00:09:36.084 99.50000% : 27020.997us 00:09:36.084 99.90000% : 28835.840us 00:09:36.084 99.99000% : 29239.138us 00:09:36.084 99.99900% : 29440.788us 00:09:36.084 99.99990% : 29440.788us 00:09:36.084 99.99999% : 29440.788us 00:09:36.084 00:09:36.084 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:36.084 ================================================================================= 00:09:36.084 1.00000% : 5167.262us 00:09:36.084 10.00000% : 5444.529us 00:09:36.084 25.00000% : 5822.622us 00:09:36.084 50.00000% : 6377.157us 00:09:36.084 75.00000% : 6956.898us 00:09:36.084 90.00000% : 7461.022us 00:09:36.084 95.00000% : 9376.689us 00:09:36.084 98.00000% : 10939.471us 00:09:36.084 99.00000% : 12199.778us 00:09:36.084 99.50000% : 26012.751us 00:09:36.084 99.90000% : 28029.243us 00:09:36.084 99.99000% : 28432.542us 00:09:36.084 99.99900% : 28432.542us 00:09:36.084 99.99990% : 28432.542us 00:09:36.084 99.99999% : 28432.542us 00:09:36.084 00:09:36.084 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:36.084 ================================================================================= 00:09:36.084 1.00000% : 5318.498us 00:09:36.084 10.00000% : 5570.560us 00:09:36.084 25.00000% : 5873.034us 00:09:36.084 50.00000% : 6377.157us 00:09:36.084 75.00000% : 6856.074us 00:09:36.084 90.00000% : 7461.022us 00:09:36.084 95.00000% : 9527.926us 00:09:36.084 98.00000% : 10536.172us 00:09:36.084 99.00000% : 11241.945us 00:09:36.084 99.50000% : 24702.031us 00:09:36.084 99.90000% : 26617.698us 00:09:36.084 99.99000% : 27020.997us 
00:09:36.084 99.99900% : 27020.997us 00:09:36.084 99.99990% : 27020.997us 00:09:36.084 99.99999% : 27020.997us 00:09:36.084 00:09:36.084 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:36.084 ================================================================================= 00:09:36.084 1.00000% : 5318.498us 00:09:36.084 10.00000% : 5570.560us 00:09:36.084 25.00000% : 5873.034us 00:09:36.084 50.00000% : 6351.951us 00:09:36.084 75.00000% : 6856.074us 00:09:36.084 90.00000% : 7511.434us 00:09:36.084 95.00000% : 9225.452us 00:09:36.084 98.00000% : 10788.234us 00:09:36.084 99.00000% : 11796.480us 00:09:36.084 99.50000% : 24197.908us 00:09:36.084 99.90000% : 26214.400us 00:09:36.084 99.99000% : 26617.698us 00:09:36.084 99.99900% : 26617.698us 00:09:36.084 99.99990% : 26617.698us 00:09:36.084 99.99999% : 26617.698us 00:09:36.084 00:09:36.084 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:36.084 ================================================================================= 00:09:36.084 1.00000% : 5318.498us 00:09:36.084 10.00000% : 5570.560us 00:09:36.084 25.00000% : 5873.034us 00:09:36.084 50.00000% : 6377.157us 00:09:36.084 75.00000% : 6856.074us 00:09:36.084 90.00000% : 7511.434us 00:09:36.084 95.00000% : 9074.215us 00:09:36.084 98.00000% : 10939.471us 00:09:36.084 99.00000% : 11645.243us 00:09:36.084 99.50000% : 22887.188us 00:09:36.084 99.90000% : 24802.855us 00:09:36.084 99.99000% : 25206.154us 00:09:36.084 99.99900% : 25206.154us 00:09:36.084 99.99990% : 25206.154us 00:09:36.084 99.99999% : 25206.154us 00:09:36.084 00:09:36.084 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:36.084 ================================================================================= 00:09:36.084 1.00000% : 5318.498us 00:09:36.084 10.00000% : 5570.560us 00:09:36.084 25.00000% : 5898.240us 00:09:36.084 50.00000% : 6377.157us 00:09:36.084 75.00000% : 6856.074us 00:09:36.084 90.00000% : 7410.609us 00:09:36.085 95.00000% : 9275.865us 00:09:36.085 98.00000% : 11241.945us 00:09:36.085 99.00000% : 12300.603us 00:09:36.085 99.50000% : 14720.394us 00:09:36.085 99.90000% : 16636.062us 00:09:36.085 99.99000% : 17140.185us 00:09:36.085 99.99900% : 17140.185us 00:09:36.085 99.99990% : 17140.185us 00:09:36.085 99.99999% : 17140.185us 00:09:36.085 00:09:36.085 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:36.085 ============================================================================== 00:09:36.085 Range in us Cumulative IO count 00:09:36.085 5116.849 - 5142.055: 0.0052% ( 1) 00:09:36.085 5142.055 - 5167.262: 0.0207% ( 3) 00:09:36.085 5167.262 - 5192.468: 0.0310% ( 2) 00:09:36.085 5192.468 - 5217.674: 0.0621% ( 6) 00:09:36.085 5217.674 - 5242.880: 0.2070% ( 28) 00:09:36.085 5242.880 - 5268.086: 0.4450% ( 46) 00:09:36.085 5268.086 - 5293.292: 0.7864% ( 66) 00:09:36.085 5293.292 - 5318.498: 1.3038% ( 100) 00:09:36.085 5318.498 - 5343.705: 2.0333% ( 141) 00:09:36.085 5343.705 - 5368.911: 2.7680% ( 142) 00:09:36.085 5368.911 - 5394.117: 3.4872% ( 139) 00:09:36.085 5394.117 - 5419.323: 4.1753% ( 133) 00:09:36.085 5419.323 - 5444.529: 4.9721% ( 154) 00:09:36.085 5444.529 - 5469.735: 5.7844% ( 157) 00:09:36.085 5469.735 - 5494.942: 6.7519% ( 187) 00:09:36.085 5494.942 - 5520.148: 7.9263% ( 227) 00:09:36.085 5520.148 - 5545.354: 9.0594% ( 219) 00:09:36.085 5545.354 - 5570.560: 10.1976% ( 220) 00:09:36.085 5570.560 - 5595.766: 11.3618% ( 225) 00:09:36.085 5595.766 - 5620.972: 12.5362% ( 227) 00:09:36.085 5620.972 - 5646.178: 13.7521% ( 
235) 00:09:36.085 5646.178 - 5671.385: 14.9783% ( 237) 00:09:36.085 5671.385 - 5696.591: 16.2355% ( 243) 00:09:36.085 5696.591 - 5721.797: 17.4979% ( 244) 00:09:36.085 5721.797 - 5747.003: 18.7190% ( 236) 00:09:36.085 5747.003 - 5772.209: 20.0228% ( 252) 00:09:36.085 5772.209 - 5797.415: 21.2748% ( 242) 00:09:36.085 5797.415 - 5822.622: 22.5373% ( 244) 00:09:36.085 5822.622 - 5847.828: 23.7841% ( 241) 00:09:36.085 5847.828 - 5873.034: 25.0880% ( 252) 00:09:36.085 5873.034 - 5898.240: 26.4176% ( 257) 00:09:36.085 5898.240 - 5923.446: 27.6800% ( 244) 00:09:36.085 5923.446 - 5948.652: 28.9994% ( 255) 00:09:36.085 5948.652 - 5973.858: 30.3187% ( 255) 00:09:36.085 5973.858 - 5999.065: 31.6329% ( 254) 00:09:36.085 5999.065 - 6024.271: 32.9418% ( 253) 00:09:36.085 6024.271 - 6049.477: 34.2301% ( 249) 00:09:36.085 6049.477 - 6074.683: 35.5495% ( 255) 00:09:36.085 6074.683 - 6099.889: 36.8481% ( 251) 00:09:36.085 6099.889 - 6125.095: 38.1519% ( 252) 00:09:36.085 6125.095 - 6150.302: 39.4661% ( 254) 00:09:36.085 6150.302 - 6175.508: 40.7906% ( 256) 00:09:36.085 6175.508 - 6200.714: 42.0788% ( 249) 00:09:36.085 6200.714 - 6225.920: 43.3464% ( 245) 00:09:36.085 6225.920 - 6251.126: 44.6813% ( 258) 00:09:36.085 6251.126 - 6276.332: 46.0058% ( 256) 00:09:36.085 6276.332 - 6301.538: 47.3148% ( 253) 00:09:36.085 6301.538 - 6326.745: 48.6082% ( 250) 00:09:36.085 6326.745 - 6351.951: 49.9431% ( 258) 00:09:36.085 6351.951 - 6377.157: 51.2572% ( 254) 00:09:36.085 6377.157 - 6402.363: 52.5973% ( 259) 00:09:36.085 6402.363 - 6427.569: 53.8545% ( 243) 00:09:36.085 6427.569 - 6452.775: 55.2204% ( 264) 00:09:36.085 6452.775 - 6503.188: 57.8746% ( 513) 00:09:36.085 6503.188 - 6553.600: 60.5546% ( 518) 00:09:36.085 6553.600 - 6604.012: 63.2554% ( 522) 00:09:36.085 6604.012 - 6654.425: 65.9510% ( 521) 00:09:36.085 6654.425 - 6704.837: 68.6155% ( 515) 00:09:36.085 6704.837 - 6755.249: 71.3162% ( 522) 00:09:36.085 6755.249 - 6805.662: 74.0532% ( 529) 00:09:36.085 6805.662 - 6856.074: 76.7695% ( 525) 00:09:36.085 6856.074 - 6906.486: 79.4185% ( 512) 00:09:36.085 6906.486 - 6956.898: 81.9123% ( 482) 00:09:36.085 6956.898 - 7007.311: 83.9766% ( 399) 00:09:36.085 7007.311 - 7057.723: 85.6322% ( 320) 00:09:36.085 7057.723 - 7108.135: 87.0550% ( 275) 00:09:36.085 7108.135 - 7158.548: 88.2036% ( 222) 00:09:36.085 7158.548 - 7208.960: 89.0004% ( 154) 00:09:36.085 7208.960 - 7259.372: 89.5333% ( 103) 00:09:36.085 7259.372 - 7309.785: 89.9834% ( 87) 00:09:36.085 7309.785 - 7360.197: 90.3353% ( 68) 00:09:36.085 7360.197 - 7410.609: 90.6198% ( 55) 00:09:36.085 7410.609 - 7461.022: 90.8889% ( 52) 00:09:36.085 7461.022 - 7511.434: 91.1269% ( 46) 00:09:36.085 7511.434 - 7561.846: 91.2976% ( 33) 00:09:36.085 7561.846 - 7612.258: 91.4528% ( 30) 00:09:36.085 7612.258 - 7662.671: 91.6029% ( 29) 00:09:36.085 7662.671 - 7713.083: 91.7581% ( 30) 00:09:36.085 7713.083 - 7763.495: 91.8926% ( 26) 00:09:36.085 7763.495 - 7813.908: 92.0219% ( 25) 00:09:36.085 7813.908 - 7864.320: 92.1151% ( 18) 00:09:36.085 7864.320 - 7914.732: 92.2392% ( 24) 00:09:36.085 7914.732 - 7965.145: 92.3634% ( 24) 00:09:36.085 7965.145 - 8015.557: 92.4772% ( 22) 00:09:36.085 8015.557 - 8065.969: 92.5807% ( 20) 00:09:36.085 8065.969 - 8116.382: 92.6945% ( 22) 00:09:36.085 8116.382 - 8166.794: 92.8084% ( 22) 00:09:36.085 8166.794 - 8217.206: 92.9118% ( 20) 00:09:36.085 8217.206 - 8267.618: 93.0050% ( 18) 00:09:36.085 8267.618 - 8318.031: 93.1033% ( 19) 00:09:36.085 8318.031 - 8368.443: 93.2171% ( 22) 00:09:36.085 8368.443 - 8418.855: 93.3206% ( 20) 00:09:36.085 8418.855 
- 8469.268: 93.4189% ( 19) 00:09:36.085 8469.268 - 8519.680: 93.5120% ( 18) 00:09:36.085 8519.680 - 8570.092: 93.6207% ( 21) 00:09:36.085 8570.092 - 8620.505: 93.7397% ( 23) 00:09:36.085 8620.505 - 8670.917: 93.8483% ( 21) 00:09:36.085 8670.917 - 8721.329: 93.9725% ( 24) 00:09:36.085 8721.329 - 8771.742: 94.0656% ( 18) 00:09:36.085 8771.742 - 8822.154: 94.1432% ( 15) 00:09:36.085 8822.154 - 8872.566: 94.2260% ( 16) 00:09:36.085 8872.566 - 8922.978: 94.3243% ( 19) 00:09:36.085 8922.978 - 8973.391: 94.4329% ( 21) 00:09:36.085 8973.391 - 9023.803: 94.5364% ( 20) 00:09:36.085 9023.803 - 9074.215: 94.6399% ( 20) 00:09:36.085 9074.215 - 9124.628: 94.7537% ( 22) 00:09:36.085 9124.628 - 9175.040: 94.8675% ( 22) 00:09:36.085 9175.040 - 9225.452: 94.9865% ( 23) 00:09:36.085 9225.452 - 9275.865: 95.1055% ( 23) 00:09:36.085 9275.865 - 9326.277: 95.2142% ( 21) 00:09:36.085 9326.277 - 9376.689: 95.3073% ( 18) 00:09:36.085 9376.689 - 9427.102: 95.4005% ( 18) 00:09:36.085 9427.102 - 9477.514: 95.4781% ( 15) 00:09:36.085 9477.514 - 9527.926: 95.5971% ( 23) 00:09:36.085 9527.926 - 9578.338: 95.7005% ( 20) 00:09:36.085 9578.338 - 9628.751: 95.8195% ( 23) 00:09:36.085 9628.751 - 9679.163: 95.9178% ( 19) 00:09:36.085 9679.163 - 9729.575: 96.0317% ( 22) 00:09:36.085 9729.575 - 9779.988: 96.1403% ( 21) 00:09:36.085 9779.988 - 9830.400: 96.2593% ( 23) 00:09:36.085 9830.400 - 9880.812: 96.3680% ( 21) 00:09:36.085 9880.812 - 9931.225: 96.4663% ( 19) 00:09:36.085 9931.225 - 9981.637: 96.5542% ( 17) 00:09:36.085 9981.637 - 10032.049: 96.6474% ( 18) 00:09:36.085 10032.049 - 10082.462: 96.7146% ( 13) 00:09:36.085 10082.462 - 10132.874: 96.7767% ( 12) 00:09:36.085 10132.874 - 10183.286: 96.8336% ( 11) 00:09:36.085 10183.286 - 10233.698: 96.8905% ( 11) 00:09:36.085 10233.698 - 10284.111: 96.9526% ( 12) 00:09:36.085 10284.111 - 10334.523: 97.0199% ( 13) 00:09:36.085 10334.523 - 10384.935: 97.0716% ( 10) 00:09:36.085 10384.935 - 10435.348: 97.1337% ( 12) 00:09:36.085 10435.348 - 10485.760: 97.1906% ( 11) 00:09:36.085 10485.760 - 10536.172: 97.2579% ( 13) 00:09:36.085 10536.172 - 10586.585: 97.3148% ( 11) 00:09:36.085 10586.585 - 10636.997: 97.3613% ( 9) 00:09:36.085 10636.997 - 10687.409: 97.4079% ( 9) 00:09:36.085 10687.409 - 10737.822: 97.4389% ( 6) 00:09:36.085 10737.822 - 10788.234: 97.4907% ( 10) 00:09:36.085 10788.234 - 10838.646: 97.5528% ( 12) 00:09:36.085 10838.646 - 10889.058: 97.6149% ( 12) 00:09:36.085 10889.058 - 10939.471: 97.6718% ( 11) 00:09:36.085 10939.471 - 10989.883: 97.7287% ( 11) 00:09:36.085 10989.883 - 11040.295: 97.7908% ( 12) 00:09:36.085 11040.295 - 11090.708: 97.8580% ( 13) 00:09:36.085 11090.708 - 11141.120: 97.9253% ( 13) 00:09:36.085 11141.120 - 11191.532: 97.9925% ( 13) 00:09:36.085 11191.532 - 11241.945: 98.0546% ( 12) 00:09:36.085 11241.945 - 11292.357: 98.1167% ( 12) 00:09:36.085 11292.357 - 11342.769: 98.1840% ( 13) 00:09:36.085 11342.769 - 11393.182: 98.2357% ( 10) 00:09:36.085 11393.182 - 11443.594: 98.2771% ( 8) 00:09:36.085 11443.594 - 11494.006: 98.3237% ( 9) 00:09:36.085 11494.006 - 11544.418: 98.3754% ( 10) 00:09:36.085 11544.418 - 11594.831: 98.4168% ( 8) 00:09:36.085 11594.831 - 11645.243: 98.4685% ( 10) 00:09:36.085 11645.243 - 11695.655: 98.5203% ( 10) 00:09:36.085 11695.655 - 11746.068: 98.5617% ( 8) 00:09:36.085 11746.068 - 11796.480: 98.6134% ( 10) 00:09:36.085 11796.480 - 11846.892: 98.6600% ( 9) 00:09:36.085 11846.892 - 11897.305: 98.7065% ( 9) 00:09:36.085 11897.305 - 11947.717: 98.7583% ( 10) 00:09:36.085 11947.717 - 11998.129: 98.8100% ( 10) 00:09:36.085 11998.129 - 
12048.542: 98.8566% ( 9) 00:09:36.085 12048.542 - 12098.954: 98.9031% ( 9) 00:09:36.085 12098.954 - 12149.366: 98.9549% ( 10) 00:09:36.085 12149.366 - 12199.778: 98.9963% ( 8) 00:09:36.085 12199.778 - 12250.191: 99.0480% ( 10) 00:09:36.085 12250.191 - 12300.603: 99.0946% ( 9) 00:09:36.085 12300.603 - 12351.015: 99.1463% ( 10) 00:09:36.085 12351.015 - 12401.428: 99.1981% ( 10) 00:09:36.085 12401.428 - 12451.840: 99.2394% ( 8) 00:09:36.085 12451.840 - 12502.252: 99.2653% ( 5) 00:09:36.085 12502.252 - 12552.665: 99.2808% ( 3) 00:09:36.085 12552.665 - 12603.077: 99.3015% ( 4) 00:09:36.085 12603.077 - 12653.489: 99.3222% ( 4) 00:09:36.085 12653.489 - 12703.902: 99.3377% ( 3) 00:09:36.085 26012.751 - 26214.400: 99.3429% ( 1) 00:09:36.085 26214.400 - 26416.049: 99.3895% ( 9) 00:09:36.085 26416.049 - 26617.698: 99.4309% ( 8) 00:09:36.085 26617.698 - 26819.348: 99.4774% ( 9) 00:09:36.085 26819.348 - 27020.997: 99.5240% ( 9) 00:09:36.085 27020.997 - 27222.646: 99.5654% ( 8) 00:09:36.085 27222.646 - 27424.295: 99.6120% ( 9) 00:09:36.086 27424.295 - 27625.945: 99.6327% ( 4) 00:09:36.086 27625.945 - 27827.594: 99.6792% ( 9) 00:09:36.086 27827.594 - 28029.243: 99.7258% ( 9) 00:09:36.086 28029.243 - 28230.892: 99.7672% ( 8) 00:09:36.086 28230.892 - 28432.542: 99.8137% ( 9) 00:09:36.086 28432.542 - 28634.191: 99.8603% ( 9) 00:09:36.086 28634.191 - 28835.840: 99.9017% ( 8) 00:09:36.086 28835.840 - 29037.489: 99.9483% ( 9) 00:09:36.086 29037.489 - 29239.138: 99.9948% ( 9) 00:09:36.086 29239.138 - 29440.788: 100.0000% ( 1) 00:09:36.086 00:09:36.086 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:36.086 ============================================================================== 00:09:36.086 Range in us Cumulative IO count 00:09:36.086 4940.406 - 4965.612: 0.0052% ( 1) 00:09:36.086 4965.612 - 4990.818: 0.0310% ( 5) 00:09:36.086 4990.818 - 5016.025: 0.0466% ( 3) 00:09:36.086 5016.025 - 5041.231: 0.0776% ( 6) 00:09:36.086 5041.231 - 5066.437: 0.1190% ( 8) 00:09:36.086 5066.437 - 5091.643: 0.2483% ( 25) 00:09:36.086 5091.643 - 5116.849: 0.5174% ( 52) 00:09:36.086 5116.849 - 5142.055: 0.9054% ( 75) 00:09:36.086 5142.055 - 5167.262: 1.4538% ( 106) 00:09:36.086 5167.262 - 5192.468: 2.0023% ( 106) 00:09:36.086 5192.468 - 5217.674: 2.6283% ( 121) 00:09:36.086 5217.674 - 5242.880: 3.2337% ( 117) 00:09:36.086 5242.880 - 5268.086: 3.8390% ( 117) 00:09:36.086 5268.086 - 5293.292: 4.6099% ( 149) 00:09:36.086 5293.292 - 5318.498: 5.4067% ( 154) 00:09:36.086 5318.498 - 5343.705: 6.2345% ( 160) 00:09:36.086 5343.705 - 5368.911: 7.1761% ( 182) 00:09:36.086 5368.911 - 5394.117: 8.0971% ( 178) 00:09:36.086 5394.117 - 5419.323: 9.0749% ( 189) 00:09:36.086 5419.323 - 5444.529: 10.0321% ( 185) 00:09:36.086 5444.529 - 5469.735: 11.0410% ( 195) 00:09:36.086 5469.735 - 5494.942: 12.1275% ( 210) 00:09:36.086 5494.942 - 5520.148: 13.2140% ( 210) 00:09:36.086 5520.148 - 5545.354: 14.2384% ( 198) 00:09:36.086 5545.354 - 5570.560: 15.3456% ( 214) 00:09:36.086 5570.560 - 5595.766: 16.2821% ( 181) 00:09:36.086 5595.766 - 5620.972: 17.4255% ( 221) 00:09:36.086 5620.972 - 5646.178: 18.4240% ( 193) 00:09:36.086 5646.178 - 5671.385: 19.5623% ( 220) 00:09:36.086 5671.385 - 5696.591: 20.5712% ( 195) 00:09:36.086 5696.591 - 5721.797: 21.6422% ( 207) 00:09:36.086 5721.797 - 5747.003: 22.7287% ( 210) 00:09:36.086 5747.003 - 5772.209: 23.8566% ( 218) 00:09:36.086 5772.209 - 5797.415: 24.9431% ( 210) 00:09:36.086 5797.415 - 5822.622: 26.0762% ( 219) 00:09:36.086 5822.622 - 5847.828: 27.1989% ( 217) 00:09:36.086 5847.828 - 
5873.034: 28.2492% ( 203) 00:09:36.086 5873.034 - 5898.240: 29.4133% ( 225) 00:09:36.086 5898.240 - 5923.446: 30.3911% ( 189) 00:09:36.086 5923.446 - 5948.652: 31.6432% ( 242) 00:09:36.086 5948.652 - 5973.858: 32.6935% ( 203) 00:09:36.086 5973.858 - 5999.065: 33.8421% ( 222) 00:09:36.086 5999.065 - 6024.271: 34.9338% ( 211) 00:09:36.086 6024.271 - 6049.477: 36.0720% ( 220) 00:09:36.086 6049.477 - 6074.683: 37.1740% ( 213) 00:09:36.086 6074.683 - 6099.889: 38.3382% ( 225) 00:09:36.086 6099.889 - 6125.095: 39.4350% ( 212) 00:09:36.086 6125.095 - 6150.302: 40.4957% ( 205) 00:09:36.086 6150.302 - 6175.508: 41.7477% ( 242) 00:09:36.086 6175.508 - 6200.714: 42.7980% ( 203) 00:09:36.086 6200.714 - 6225.920: 43.8897% ( 211) 00:09:36.086 6225.920 - 6251.126: 44.9710% ( 209) 00:09:36.086 6251.126 - 6276.332: 46.0886% ( 216) 00:09:36.086 6276.332 - 6301.538: 47.1596% ( 207) 00:09:36.086 6301.538 - 6326.745: 48.2823% ( 217) 00:09:36.086 6326.745 - 6351.951: 49.4257% ( 221) 00:09:36.086 6351.951 - 6377.157: 50.5484% ( 217) 00:09:36.086 6377.157 - 6402.363: 51.6660% ( 216) 00:09:36.086 6402.363 - 6427.569: 52.7680% ( 213) 00:09:36.086 6427.569 - 6452.775: 53.8959% ( 218) 00:09:36.086 6452.775 - 6503.188: 56.1258% ( 431) 00:09:36.086 6503.188 - 6553.600: 58.3558% ( 431) 00:09:36.086 6553.600 - 6604.012: 60.6219% ( 438) 00:09:36.086 6604.012 - 6654.425: 62.9036% ( 441) 00:09:36.086 6654.425 - 6704.837: 65.1438% ( 433) 00:09:36.086 6704.837 - 6755.249: 67.3893% ( 434) 00:09:36.086 6755.249 - 6805.662: 69.7227% ( 451) 00:09:36.086 6805.662 - 6856.074: 71.9423% ( 429) 00:09:36.086 6856.074 - 6906.486: 74.2705% ( 450) 00:09:36.086 6906.486 - 6956.898: 76.5884% ( 448) 00:09:36.086 6956.898 - 7007.311: 78.8700% ( 441) 00:09:36.086 7007.311 - 7057.723: 81.0948% ( 430) 00:09:36.086 7057.723 - 7108.135: 83.2833% ( 423) 00:09:36.086 7108.135 - 7158.548: 85.0321% ( 338) 00:09:36.086 7158.548 - 7208.960: 86.5584% ( 295) 00:09:36.086 7208.960 - 7259.372: 87.8053% ( 241) 00:09:36.086 7259.372 - 7309.785: 88.7365% ( 180) 00:09:36.086 7309.785 - 7360.197: 89.4712% ( 142) 00:09:36.086 7360.197 - 7410.609: 89.9886% ( 100) 00:09:36.086 7410.609 - 7461.022: 90.3301% ( 66) 00:09:36.086 7461.022 - 7511.434: 90.6250% ( 57) 00:09:36.086 7511.434 - 7561.846: 90.8578% ( 45) 00:09:36.086 7561.846 - 7612.258: 91.0958% ( 46) 00:09:36.086 7612.258 - 7662.671: 91.3079% ( 41) 00:09:36.086 7662.671 - 7713.083: 91.5097% ( 39) 00:09:36.086 7713.083 - 7763.495: 91.7063% ( 38) 00:09:36.086 7763.495 - 7813.908: 91.8667% ( 31) 00:09:36.086 7813.908 - 7864.320: 92.0116% ( 28) 00:09:36.086 7864.320 - 7914.732: 92.1306% ( 23) 00:09:36.086 7914.732 - 7965.145: 92.2341% ( 20) 00:09:36.086 7965.145 - 8015.557: 92.3324% ( 19) 00:09:36.086 8015.557 - 8065.969: 92.4307% ( 19) 00:09:36.086 8065.969 - 8116.382: 92.5341% ( 20) 00:09:36.086 8116.382 - 8166.794: 92.6325% ( 19) 00:09:36.086 8166.794 - 8217.206: 92.7308% ( 19) 00:09:36.086 8217.206 - 8267.618: 92.8135% ( 16) 00:09:36.086 8267.618 - 8318.031: 92.9118% ( 19) 00:09:36.086 8318.031 - 8368.443: 93.0050% ( 18) 00:09:36.086 8368.443 - 8418.855: 93.0981% ( 18) 00:09:36.086 8418.855 - 8469.268: 93.1757% ( 15) 00:09:36.086 8469.268 - 8519.680: 93.2533% ( 15) 00:09:36.086 8519.680 - 8570.092: 93.3361% ( 16) 00:09:36.086 8570.092 - 8620.505: 93.4240% ( 17) 00:09:36.086 8620.505 - 8670.917: 93.5327% ( 21) 00:09:36.086 8670.917 - 8721.329: 93.6258% ( 18) 00:09:36.086 8721.329 - 8771.742: 93.7138% ( 17) 00:09:36.086 8771.742 - 8822.154: 93.8121% ( 19) 00:09:36.086 8822.154 - 8872.566: 93.9104% ( 19) 
00:09:36.086 8872.566 - 8922.978: 94.0087% ( 19) 00:09:36.086 8922.978 - 8973.391: 94.1018% ( 18) 00:09:36.086 8973.391 - 9023.803: 94.2053% ( 20) 00:09:36.086 9023.803 - 9074.215: 94.3139% ( 21) 00:09:36.086 9074.215 - 9124.628: 94.4381% ( 24) 00:09:36.086 9124.628 - 9175.040: 94.5468% ( 21) 00:09:36.086 9175.040 - 9225.452: 94.6606% ( 22) 00:09:36.086 9225.452 - 9275.865: 94.7899% ( 25) 00:09:36.086 9275.865 - 9326.277: 94.8934% ( 20) 00:09:36.086 9326.277 - 9376.689: 95.0021% ( 21) 00:09:36.086 9376.689 - 9427.102: 95.0849% ( 16) 00:09:36.086 9427.102 - 9477.514: 95.1935% ( 21) 00:09:36.086 9477.514 - 9527.926: 95.2918% ( 19) 00:09:36.086 9527.926 - 9578.338: 95.3953% ( 20) 00:09:36.086 9578.338 - 9628.751: 95.4832% ( 17) 00:09:36.086 9628.751 - 9679.163: 95.5971% ( 22) 00:09:36.086 9679.163 - 9729.575: 95.6850% ( 17) 00:09:36.086 9729.575 - 9779.988: 95.7988% ( 22) 00:09:36.086 9779.988 - 9830.400: 95.8868% ( 17) 00:09:36.086 9830.400 - 9880.812: 95.9748% ( 17) 00:09:36.086 9880.812 - 9931.225: 96.0989% ( 24) 00:09:36.086 9931.225 - 9981.637: 96.1972% ( 19) 00:09:36.086 9981.637 - 10032.049: 96.3111% ( 22) 00:09:36.086 10032.049 - 10082.462: 96.4042% ( 18) 00:09:36.086 10082.462 - 10132.874: 96.5025% ( 19) 00:09:36.086 10132.874 - 10183.286: 96.6163% ( 22) 00:09:36.086 10183.286 - 10233.698: 96.7094% ( 18) 00:09:36.086 10233.698 - 10284.111: 96.8129% ( 20) 00:09:36.086 10284.111 - 10334.523: 96.9164% ( 20) 00:09:36.086 10334.523 - 10384.935: 97.0199% ( 20) 00:09:36.086 10384.935 - 10435.348: 97.1285% ( 21) 00:09:36.086 10435.348 - 10485.760: 97.2268% ( 19) 00:09:36.086 10485.760 - 10536.172: 97.3200% ( 18) 00:09:36.086 10536.172 - 10586.585: 97.4286% ( 21) 00:09:36.086 10586.585 - 10636.997: 97.5114% ( 16) 00:09:36.086 10636.997 - 10687.409: 97.5942% ( 16) 00:09:36.086 10687.409 - 10737.822: 97.6925% ( 19) 00:09:36.086 10737.822 - 10788.234: 97.7804% ( 17) 00:09:36.086 10788.234 - 10838.646: 97.8736% ( 18) 00:09:36.086 10838.646 - 10889.058: 97.9563% ( 16) 00:09:36.086 10889.058 - 10939.471: 98.0081% ( 10) 00:09:36.086 10939.471 - 10989.883: 98.0495% ( 8) 00:09:36.086 10989.883 - 11040.295: 98.1115% ( 12) 00:09:36.086 11040.295 - 11090.708: 98.1633% ( 10) 00:09:36.086 11090.708 - 11141.120: 98.2099% ( 9) 00:09:36.086 11141.120 - 11191.532: 98.2616% ( 10) 00:09:36.086 11191.532 - 11241.945: 98.3288% ( 13) 00:09:36.086 11241.945 - 11292.357: 98.3702% ( 8) 00:09:36.086 11292.357 - 11342.769: 98.4272% ( 11) 00:09:36.086 11342.769 - 11393.182: 98.4996% ( 14) 00:09:36.086 11393.182 - 11443.594: 98.5462% ( 9) 00:09:36.086 11443.594 - 11494.006: 98.5927% ( 9) 00:09:36.086 11494.006 - 11544.418: 98.6445% ( 10) 00:09:36.086 11544.418 - 11594.831: 98.6755% ( 6) 00:09:36.086 11594.831 - 11645.243: 98.7221% ( 9) 00:09:36.086 11645.243 - 11695.655: 98.7583% ( 7) 00:09:36.086 11695.655 - 11746.068: 98.7997% ( 8) 00:09:36.086 11746.068 - 11796.480: 98.8255% ( 5) 00:09:36.086 11796.480 - 11846.892: 98.8514% ( 5) 00:09:36.086 11846.892 - 11897.305: 98.8773% ( 5) 00:09:36.086 11897.305 - 11947.717: 98.9135% ( 7) 00:09:36.086 11947.717 - 11998.129: 98.9342% ( 4) 00:09:36.086 11998.129 - 12048.542: 98.9601% ( 5) 00:09:36.086 12048.542 - 12098.954: 98.9756% ( 3) 00:09:36.086 12098.954 - 12149.366: 98.9911% ( 3) 00:09:36.086 12149.366 - 12199.778: 99.0118% ( 4) 00:09:36.086 12199.778 - 12250.191: 99.0273% ( 3) 00:09:36.086 12250.191 - 12300.603: 99.0428% ( 3) 00:09:36.086 12300.603 - 12351.015: 99.0635% ( 4) 00:09:36.086 12351.015 - 12401.428: 99.0739% ( 2) 00:09:36.086 12401.428 - 12451.840: 99.0946% ( 4) 
00:09:36.087 12451.840 - 12502.252: 99.1153% ( 4) 00:09:36.087 12502.252 - 12552.665: 99.1308% ( 3) 00:09:36.087 12552.665 - 12603.077: 99.1463% ( 3) 00:09:36.087 12603.077 - 12653.489: 99.1670% ( 4) 00:09:36.087 12653.489 - 12703.902: 99.1774% ( 2) 00:09:36.087 12703.902 - 12754.314: 99.1981% ( 4) 00:09:36.087 12754.314 - 12804.726: 99.2084% ( 2) 00:09:36.087 12804.726 - 12855.138: 99.2291% ( 4) 00:09:36.087 12855.138 - 12905.551: 99.2446% ( 3) 00:09:36.087 12905.551 - 13006.375: 99.2860% ( 8) 00:09:36.087 13006.375 - 13107.200: 99.3119% ( 5) 00:09:36.087 13107.200 - 13208.025: 99.3377% ( 5) 00:09:36.087 25004.505 - 25105.329: 99.3533% ( 3) 00:09:36.087 25105.329 - 25206.154: 99.3740% ( 4) 00:09:36.087 25206.154 - 25306.978: 99.3947% ( 4) 00:09:36.087 25306.978 - 25407.803: 99.4154% ( 4) 00:09:36.087 25407.803 - 25508.628: 99.4309% ( 3) 00:09:36.087 25508.628 - 25609.452: 99.4516% ( 4) 00:09:36.087 25609.452 - 25710.277: 99.4723% ( 4) 00:09:36.087 25710.277 - 25811.102: 99.4930% ( 4) 00:09:36.087 25811.102 - 26012.751: 99.5292% ( 7) 00:09:36.087 26012.751 - 26214.400: 99.5602% ( 6) 00:09:36.087 26214.400 - 26416.049: 99.6016% ( 8) 00:09:36.087 26416.049 - 26617.698: 99.6430% ( 8) 00:09:36.087 26617.698 - 26819.348: 99.6844% ( 8) 00:09:36.087 26819.348 - 27020.997: 99.7258% ( 8) 00:09:36.087 27020.997 - 27222.646: 99.7620% ( 7) 00:09:36.087 27222.646 - 27424.295: 99.8034% ( 8) 00:09:36.087 27424.295 - 27625.945: 99.8448% ( 8) 00:09:36.087 27625.945 - 27827.594: 99.8862% ( 8) 00:09:36.087 27827.594 - 28029.243: 99.9276% ( 8) 00:09:36.087 28029.243 - 28230.892: 99.9638% ( 7) 00:09:36.087 28230.892 - 28432.542: 100.0000% ( 7) 00:09:36.087 00:09:36.087 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:36.087 ============================================================================== 00:09:36.087 Range in us Cumulative IO count 00:09:36.087 5066.437 - 5091.643: 0.0052% ( 1) 00:09:36.087 5091.643 - 5116.849: 0.0155% ( 2) 00:09:36.087 5116.849 - 5142.055: 0.0259% ( 2) 00:09:36.087 5142.055 - 5167.262: 0.0517% ( 5) 00:09:36.087 5167.262 - 5192.468: 0.0983% ( 9) 00:09:36.087 5192.468 - 5217.674: 0.1863% ( 17) 00:09:36.087 5217.674 - 5242.880: 0.2535% ( 13) 00:09:36.087 5242.880 - 5268.086: 0.5329% ( 54) 00:09:36.087 5268.086 - 5293.292: 0.9882% ( 88) 00:09:36.087 5293.292 - 5318.498: 1.5159% ( 102) 00:09:36.087 5318.498 - 5343.705: 2.1471% ( 122) 00:09:36.087 5343.705 - 5368.911: 2.9905% ( 163) 00:09:36.087 5368.911 - 5394.117: 3.6786% ( 133) 00:09:36.087 5394.117 - 5419.323: 4.4495% ( 149) 00:09:36.087 5419.323 - 5444.529: 5.2825% ( 161) 00:09:36.087 5444.529 - 5469.735: 6.2448% ( 186) 00:09:36.087 5469.735 - 5494.942: 7.3055% ( 205) 00:09:36.087 5494.942 - 5520.148: 8.3247% ( 197) 00:09:36.087 5520.148 - 5545.354: 9.4319% ( 214) 00:09:36.087 5545.354 - 5570.560: 10.5650% ( 219) 00:09:36.087 5570.560 - 5595.766: 11.7188% ( 223) 00:09:36.087 5595.766 - 5620.972: 12.8932% ( 227) 00:09:36.087 5620.972 - 5646.178: 14.0728% ( 228) 00:09:36.087 5646.178 - 5671.385: 15.3094% ( 239) 00:09:36.087 5671.385 - 5696.591: 16.5718% ( 244) 00:09:36.087 5696.591 - 5721.797: 17.8084% ( 239) 00:09:36.087 5721.797 - 5747.003: 19.0346% ( 237) 00:09:36.087 5747.003 - 5772.209: 20.2815% ( 241) 00:09:36.087 5772.209 - 5797.415: 21.5128% ( 238) 00:09:36.087 5797.415 - 5822.622: 22.7649% ( 242) 00:09:36.087 5822.622 - 5847.828: 24.0221% ( 243) 00:09:36.087 5847.828 - 5873.034: 25.2742% ( 242) 00:09:36.087 5873.034 - 5898.240: 26.5315% ( 243) 00:09:36.087 5898.240 - 5923.446: 27.8042% ( 246) 
00:09:36.087 5923.446 - 5948.652: 29.1236% ( 255) 00:09:36.087 5948.652 - 5973.858: 30.4325% ( 253) 00:09:36.087 5973.858 - 5999.065: 31.6898% ( 243) 00:09:36.087 5999.065 - 6024.271: 33.0401% ( 261) 00:09:36.087 6024.271 - 6049.477: 34.3491% ( 253) 00:09:36.087 6049.477 - 6074.683: 35.6478% ( 251) 00:09:36.087 6074.683 - 6099.889: 36.9050% ( 243) 00:09:36.087 6099.889 - 6125.095: 38.2192% ( 254) 00:09:36.087 6125.095 - 6150.302: 39.5230% ( 252) 00:09:36.087 6150.302 - 6175.508: 40.8268% ( 252) 00:09:36.087 6175.508 - 6200.714: 42.1565% ( 257) 00:09:36.087 6200.714 - 6225.920: 43.4603% ( 252) 00:09:36.087 6225.920 - 6251.126: 44.7486% ( 249) 00:09:36.087 6251.126 - 6276.332: 46.0161% ( 245) 00:09:36.087 6276.332 - 6301.538: 47.3303% ( 254) 00:09:36.087 6301.538 - 6326.745: 48.5979% ( 245) 00:09:36.087 6326.745 - 6351.951: 49.9224% ( 256) 00:09:36.087 6351.951 - 6377.157: 51.1848% ( 244) 00:09:36.087 6377.157 - 6402.363: 52.5197% ( 258) 00:09:36.087 6402.363 - 6427.569: 53.8390% ( 255) 00:09:36.087 6427.569 - 6452.775: 55.1635% ( 256) 00:09:36.087 6452.775 - 6503.188: 57.8384% ( 517) 00:09:36.087 6503.188 - 6553.600: 60.4925% ( 513) 00:09:36.087 6553.600 - 6604.012: 63.1312% ( 510) 00:09:36.087 6604.012 - 6654.425: 65.8268% ( 521) 00:09:36.087 6654.425 - 6704.837: 68.4758% ( 512) 00:09:36.087 6704.837 - 6755.249: 71.1610% ( 519) 00:09:36.087 6755.249 - 6805.662: 73.8618% ( 522) 00:09:36.087 6805.662 - 6856.074: 76.4849% ( 507) 00:09:36.087 6856.074 - 6906.486: 79.1391% ( 513) 00:09:36.087 6906.486 - 6956.898: 81.5501% ( 466) 00:09:36.087 6956.898 - 7007.311: 83.6248% ( 401) 00:09:36.087 7007.311 - 7057.723: 85.3218% ( 328) 00:09:36.087 7057.723 - 7108.135: 86.6774% ( 262) 00:09:36.087 7108.135 - 7158.548: 87.7121% ( 200) 00:09:36.087 7158.548 - 7208.960: 88.4468% ( 142) 00:09:36.087 7208.960 - 7259.372: 88.9487% ( 97) 00:09:36.087 7259.372 - 7309.785: 89.3419% ( 76) 00:09:36.087 7309.785 - 7360.197: 89.6420% ( 58) 00:09:36.087 7360.197 - 7410.609: 89.8903% ( 48) 00:09:36.087 7410.609 - 7461.022: 90.1231% ( 45) 00:09:36.087 7461.022 - 7511.434: 90.3560% ( 45) 00:09:36.087 7511.434 - 7561.846: 90.5526% ( 38) 00:09:36.087 7561.846 - 7612.258: 90.7595% ( 40) 00:09:36.087 7612.258 - 7662.671: 90.9406% ( 35) 00:09:36.087 7662.671 - 7713.083: 91.0855% ( 28) 00:09:36.087 7713.083 - 7763.495: 91.2407% ( 30) 00:09:36.087 7763.495 - 7813.908: 91.3442% ( 20) 00:09:36.087 7813.908 - 7864.320: 91.4321% ( 17) 00:09:36.087 7864.320 - 7914.732: 91.5046% ( 14) 00:09:36.087 7914.732 - 7965.145: 91.5977% ( 18) 00:09:36.087 7965.145 - 8015.557: 91.7012% ( 20) 00:09:36.087 8015.557 - 8065.969: 91.7891% ( 17) 00:09:36.087 8065.969 - 8116.382: 91.8719% ( 16) 00:09:36.087 8116.382 - 8166.794: 91.9443% ( 14) 00:09:36.087 8166.794 - 8217.206: 92.0375% ( 18) 00:09:36.087 8217.206 - 8267.618: 92.1202% ( 16) 00:09:36.087 8267.618 - 8318.031: 92.1875% ( 13) 00:09:36.087 8318.031 - 8368.443: 92.2755% ( 17) 00:09:36.087 8368.443 - 8418.855: 92.3634% ( 17) 00:09:36.087 8418.855 - 8469.268: 92.4462% ( 16) 00:09:36.087 8469.268 - 8519.680: 92.5445% ( 19) 00:09:36.087 8519.680 - 8570.092: 92.6428% ( 19) 00:09:36.087 8570.092 - 8620.505: 92.7308% ( 17) 00:09:36.087 8620.505 - 8670.917: 92.8135% ( 16) 00:09:36.087 8670.917 - 8721.329: 92.9067% ( 18) 00:09:36.087 8721.329 - 8771.742: 93.0050% ( 19) 00:09:36.087 8771.742 - 8822.154: 93.0981% ( 18) 00:09:36.087 8822.154 - 8872.566: 93.2223% ( 24) 00:09:36.087 8872.566 - 8922.978: 93.3361% ( 22) 00:09:36.087 8922.978 - 8973.391: 93.4603% ( 24) 00:09:36.087 8973.391 - 9023.803: 
93.5793% ( 23) 00:09:36.087 9023.803 - 9074.215: 93.7190% ( 27) 00:09:36.087 9074.215 - 9124.628: 93.8638% ( 28) 00:09:36.087 9124.628 - 9175.040: 94.0139% ( 29) 00:09:36.087 9175.040 - 9225.452: 94.1743% ( 31) 00:09:36.087 9225.452 - 9275.865: 94.3191% ( 28) 00:09:36.087 9275.865 - 9326.277: 94.4536% ( 26) 00:09:36.087 9326.277 - 9376.689: 94.6037% ( 29) 00:09:36.087 9376.689 - 9427.102: 94.7434% ( 27) 00:09:36.087 9427.102 - 9477.514: 94.8934% ( 29) 00:09:36.087 9477.514 - 9527.926: 95.0486% ( 30) 00:09:36.087 9527.926 - 9578.338: 95.2142% ( 32) 00:09:36.087 9578.338 - 9628.751: 95.3746% ( 31) 00:09:36.087 9628.751 - 9679.163: 95.5246% ( 29) 00:09:36.087 9679.163 - 9729.575: 95.6850% ( 31) 00:09:36.087 9729.575 - 9779.988: 95.8351% ( 29) 00:09:36.087 9779.988 - 9830.400: 96.0058% ( 33) 00:09:36.087 9830.400 - 9880.812: 96.1662% ( 31) 00:09:36.087 9880.812 - 9931.225: 96.3266% ( 31) 00:09:36.087 9931.225 - 9981.637: 96.4870% ( 31) 00:09:36.087 9981.637 - 10032.049: 96.6267% ( 27) 00:09:36.087 10032.049 - 10082.462: 96.7612% ( 26) 00:09:36.087 10082.462 - 10132.874: 96.8905% ( 25) 00:09:36.087 10132.874 - 10183.286: 97.0302% ( 27) 00:09:36.087 10183.286 - 10233.698: 97.1699% ( 27) 00:09:36.087 10233.698 - 10284.111: 97.3044% ( 26) 00:09:36.087 10284.111 - 10334.523: 97.4545% ( 29) 00:09:36.087 10334.523 - 10384.935: 97.5890% ( 26) 00:09:36.087 10384.935 - 10435.348: 97.7390% ( 29) 00:09:36.087 10435.348 - 10485.760: 97.8787% ( 27) 00:09:36.087 10485.760 - 10536.172: 98.0288% ( 29) 00:09:36.087 10536.172 - 10586.585: 98.1581% ( 25) 00:09:36.087 10586.585 - 10636.997: 98.2875% ( 25) 00:09:36.087 10636.997 - 10687.409: 98.4116% ( 24) 00:09:36.087 10687.409 - 10737.822: 98.5048% ( 18) 00:09:36.087 10737.822 - 10788.234: 98.6031% ( 19) 00:09:36.087 10788.234 - 10838.646: 98.6858% ( 16) 00:09:36.087 10838.646 - 10889.058: 98.7479% ( 12) 00:09:36.087 10889.058 - 10939.471: 98.8152% ( 13) 00:09:36.087 10939.471 - 10989.883: 98.8618% ( 9) 00:09:36.087 10989.883 - 11040.295: 98.9031% ( 8) 00:09:36.087 11040.295 - 11090.708: 98.9445% ( 8) 00:09:36.087 11090.708 - 11141.120: 98.9756% ( 6) 00:09:36.087 11141.120 - 11191.532: 98.9963% ( 4) 00:09:36.087 11191.532 - 11241.945: 99.0170% ( 4) 00:09:36.087 11241.945 - 11292.357: 99.0377% ( 4) 00:09:36.087 11292.357 - 11342.769: 99.0532% ( 3) 00:09:36.087 11342.769 - 11393.182: 99.0739% ( 4) 00:09:36.087 11393.182 - 11443.594: 99.0946% ( 4) 00:09:36.087 11443.594 - 11494.006: 99.1153% ( 4) 00:09:36.087 11494.006 - 11544.418: 99.1360% ( 4) 00:09:36.087 11544.418 - 11594.831: 99.1515% ( 3) 00:09:36.087 11594.831 - 11645.243: 99.1722% ( 4) 00:09:36.087 11645.243 - 11695.655: 99.1877% ( 3) 00:09:36.088 11695.655 - 11746.068: 99.1981% ( 2) 00:09:36.088 11746.068 - 11796.480: 99.2084% ( 2) 00:09:36.088 11796.480 - 11846.892: 99.2188% ( 2) 00:09:36.088 11846.892 - 11897.305: 99.2291% ( 2) 00:09:36.088 11897.305 - 11947.717: 99.2394% ( 2) 00:09:36.088 11947.717 - 11998.129: 99.2498% ( 2) 00:09:36.088 11998.129 - 12048.542: 99.2601% ( 2) 00:09:36.088 12048.542 - 12098.954: 99.2705% ( 2) 00:09:36.088 12098.954 - 12149.366: 99.2808% ( 2) 00:09:36.088 12149.366 - 12199.778: 99.2912% ( 2) 00:09:36.088 12199.778 - 12250.191: 99.3015% ( 2) 00:09:36.088 12250.191 - 12300.603: 99.3119% ( 2) 00:09:36.088 12300.603 - 12351.015: 99.3274% ( 3) 00:09:36.088 12351.015 - 12401.428: 99.3377% ( 2) 00:09:36.088 23794.609 - 23895.434: 99.3429% ( 1) 00:09:36.088 23895.434 - 23996.258: 99.3636% ( 4) 00:09:36.088 23996.258 - 24097.083: 99.3843% ( 4) 00:09:36.088 24097.083 - 24197.908: 
99.4050% ( 4) 00:09:36.088 24197.908 - 24298.732: 99.4257% ( 4) 00:09:36.088 24298.732 - 24399.557: 99.4464% ( 4) 00:09:36.088 24399.557 - 24500.382: 99.4723% ( 5) 00:09:36.088 24500.382 - 24601.206: 99.4930% ( 4) 00:09:36.088 24601.206 - 24702.031: 99.5137% ( 4) 00:09:36.088 24702.031 - 24802.855: 99.5344% ( 4) 00:09:36.088 24802.855 - 24903.680: 99.5550% ( 4) 00:09:36.088 24903.680 - 25004.505: 99.5809% ( 5) 00:09:36.088 25004.505 - 25105.329: 99.6016% ( 4) 00:09:36.088 25105.329 - 25206.154: 99.6223% ( 4) 00:09:36.088 25206.154 - 25306.978: 99.6430% ( 4) 00:09:36.088 25306.978 - 25407.803: 99.6637% ( 4) 00:09:36.088 25407.803 - 25508.628: 99.6844% ( 4) 00:09:36.088 25508.628 - 25609.452: 99.7051% ( 4) 00:09:36.088 25609.452 - 25710.277: 99.7258% ( 4) 00:09:36.088 25710.277 - 25811.102: 99.7465% ( 4) 00:09:36.088 25811.102 - 26012.751: 99.7930% ( 9) 00:09:36.088 26012.751 - 26214.400: 99.8344% ( 8) 00:09:36.088 26214.400 - 26416.049: 99.8758% ( 8) 00:09:36.088 26416.049 - 26617.698: 99.9224% ( 9) 00:09:36.088 26617.698 - 26819.348: 99.9638% ( 8) 00:09:36.088 26819.348 - 27020.997: 100.0000% ( 7) 00:09:36.088 00:09:36.088 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:36.088 ============================================================================== 00:09:36.088 Range in us Cumulative IO count 00:09:36.088 5142.055 - 5167.262: 0.0207% ( 4) 00:09:36.088 5167.262 - 5192.468: 0.0517% ( 6) 00:09:36.088 5192.468 - 5217.674: 0.0828% ( 6) 00:09:36.088 5217.674 - 5242.880: 0.1966% ( 22) 00:09:36.088 5242.880 - 5268.086: 0.4760% ( 54) 00:09:36.088 5268.086 - 5293.292: 0.8589% ( 74) 00:09:36.088 5293.292 - 5318.498: 1.4487% ( 114) 00:09:36.088 5318.498 - 5343.705: 2.0954% ( 125) 00:09:36.088 5343.705 - 5368.911: 2.8974% ( 155) 00:09:36.088 5368.911 - 5394.117: 3.8390% ( 182) 00:09:36.088 5394.117 - 5419.323: 4.7910% ( 184) 00:09:36.088 5419.323 - 5444.529: 5.7274% ( 181) 00:09:36.088 5444.529 - 5469.735: 6.5294% ( 155) 00:09:36.088 5469.735 - 5494.942: 7.3986% ( 168) 00:09:36.088 5494.942 - 5520.148: 8.3195% ( 178) 00:09:36.088 5520.148 - 5545.354: 9.3647% ( 202) 00:09:36.088 5545.354 - 5570.560: 10.5546% ( 230) 00:09:36.088 5570.560 - 5595.766: 11.7032% ( 222) 00:09:36.088 5595.766 - 5620.972: 12.9139% ( 234) 00:09:36.088 5620.972 - 5646.178: 14.1556% ( 240) 00:09:36.088 5646.178 - 5671.385: 15.3611% ( 233) 00:09:36.088 5671.385 - 5696.591: 16.5770% ( 235) 00:09:36.088 5696.591 - 5721.797: 17.7670% ( 230) 00:09:36.088 5721.797 - 5747.003: 19.0035% ( 239) 00:09:36.088 5747.003 - 5772.209: 20.2866% ( 248) 00:09:36.088 5772.209 - 5797.415: 21.6163% ( 257) 00:09:36.088 5797.415 - 5822.622: 22.9305% ( 254) 00:09:36.088 5822.622 - 5847.828: 24.2291% ( 251) 00:09:36.088 5847.828 - 5873.034: 25.4863% ( 243) 00:09:36.088 5873.034 - 5898.240: 26.8005% ( 254) 00:09:36.088 5898.240 - 5923.446: 28.0733% ( 246) 00:09:36.088 5923.446 - 5948.652: 29.2943% ( 236) 00:09:36.088 5948.652 - 5973.858: 30.6188% ( 256) 00:09:36.088 5973.858 - 5999.065: 31.8916% ( 246) 00:09:36.088 5999.065 - 6024.271: 33.1436% ( 242) 00:09:36.088 6024.271 - 6049.477: 34.4423% ( 251) 00:09:36.088 6049.477 - 6074.683: 35.7461% ( 252) 00:09:36.088 6074.683 - 6099.889: 37.0706% ( 256) 00:09:36.088 6099.889 - 6125.095: 38.3847% ( 254) 00:09:36.088 6125.095 - 6150.302: 39.6730% ( 249) 00:09:36.088 6150.302 - 6175.508: 40.9510% ( 247) 00:09:36.088 6175.508 - 6200.714: 42.2237% ( 246) 00:09:36.088 6200.714 - 6225.920: 43.5379% ( 254) 00:09:36.088 6225.920 - 6251.126: 44.8727% ( 258) 00:09:36.088 6251.126 - 6276.332: 
46.1455% ( 246) 00:09:36.088 6276.332 - 6301.538: 47.4907% ( 260) 00:09:36.088 6301.538 - 6326.745: 48.8048% ( 254) 00:09:36.088 6326.745 - 6351.951: 50.1087% ( 252) 00:09:36.088 6351.951 - 6377.157: 51.3866% ( 247) 00:09:36.088 6377.157 - 6402.363: 52.6852% ( 251) 00:09:36.088 6402.363 - 6427.569: 53.9994% ( 254) 00:09:36.088 6427.569 - 6452.775: 55.2773% ( 247) 00:09:36.088 6452.775 - 6503.188: 57.9625% ( 519) 00:09:36.088 6503.188 - 6553.600: 60.5650% ( 503) 00:09:36.088 6553.600 - 6604.012: 63.2347% ( 516) 00:09:36.088 6604.012 - 6654.425: 65.8733% ( 510) 00:09:36.088 6654.425 - 6704.837: 68.4861% ( 505) 00:09:36.088 6704.837 - 6755.249: 71.1507% ( 515) 00:09:36.088 6755.249 - 6805.662: 73.7893% ( 510) 00:09:36.088 6805.662 - 6856.074: 76.4228% ( 509) 00:09:36.088 6856.074 - 6906.486: 79.0252% ( 503) 00:09:36.088 6906.486 - 6956.898: 81.3949% ( 458) 00:09:36.088 6956.898 - 7007.311: 83.3609% ( 380) 00:09:36.088 7007.311 - 7057.723: 84.9234% ( 302) 00:09:36.088 7057.723 - 7108.135: 86.1186% ( 231) 00:09:36.088 7108.135 - 7158.548: 87.1068% ( 191) 00:09:36.088 7158.548 - 7208.960: 87.8829% ( 150) 00:09:36.088 7208.960 - 7259.372: 88.4468% ( 109) 00:09:36.088 7259.372 - 7309.785: 88.9073% ( 89) 00:09:36.088 7309.785 - 7360.197: 89.2901% ( 74) 00:09:36.088 7360.197 - 7410.609: 89.6316% ( 66) 00:09:36.088 7410.609 - 7461.022: 89.9214% ( 56) 00:09:36.088 7461.022 - 7511.434: 90.1024% ( 35) 00:09:36.088 7511.434 - 7561.846: 90.2525% ( 29) 00:09:36.088 7561.846 - 7612.258: 90.4077% ( 30) 00:09:36.088 7612.258 - 7662.671: 90.5422% ( 26) 00:09:36.088 7662.671 - 7713.083: 90.6767% ( 26) 00:09:36.088 7713.083 - 7763.495: 90.8268% ( 29) 00:09:36.088 7763.495 - 7813.908: 90.9665% ( 27) 00:09:36.088 7813.908 - 7864.320: 91.0958% ( 25) 00:09:36.088 7864.320 - 7914.732: 91.2096% ( 22) 00:09:36.088 7914.732 - 7965.145: 91.3442% ( 26) 00:09:36.088 7965.145 - 8015.557: 91.4942% ( 29) 00:09:36.088 8015.557 - 8065.969: 91.6649% ( 33) 00:09:36.088 8065.969 - 8116.382: 91.7995% ( 26) 00:09:36.088 8116.382 - 8166.794: 91.9443% ( 28) 00:09:36.088 8166.794 - 8217.206: 92.0944% ( 29) 00:09:36.088 8217.206 - 8267.618: 92.2444% ( 29) 00:09:36.088 8267.618 - 8318.031: 92.4048% ( 31) 00:09:36.088 8318.031 - 8368.443: 92.5652% ( 31) 00:09:36.088 8368.443 - 8418.855: 92.7101% ( 28) 00:09:36.088 8418.855 - 8469.268: 92.8498% ( 27) 00:09:36.088 8469.268 - 8519.680: 92.9946% ( 28) 00:09:36.088 8519.680 - 8570.092: 93.1291% ( 26) 00:09:36.088 8570.092 - 8620.505: 93.2792% ( 29) 00:09:36.088 8620.505 - 8670.917: 93.4240% ( 28) 00:09:36.088 8670.917 - 8721.329: 93.5637% ( 27) 00:09:36.088 8721.329 - 8771.742: 93.7138% ( 29) 00:09:36.088 8771.742 - 8822.154: 93.8483% ( 26) 00:09:36.088 8822.154 - 8872.566: 94.0035% ( 30) 00:09:36.088 8872.566 - 8922.978: 94.1536% ( 29) 00:09:36.088 8922.978 - 8973.391: 94.3036% ( 29) 00:09:36.088 8973.391 - 9023.803: 94.4329% ( 25) 00:09:36.088 9023.803 - 9074.215: 94.5623% ( 25) 00:09:36.088 9074.215 - 9124.628: 94.7072% ( 28) 00:09:36.088 9124.628 - 9175.040: 94.8365% ( 25) 00:09:36.088 9175.040 - 9225.452: 95.0228% ( 36) 00:09:36.088 9225.452 - 9275.865: 95.1469% ( 24) 00:09:36.088 9275.865 - 9326.277: 95.2970% ( 29) 00:09:36.088 9326.277 - 9376.689: 95.4367% ( 27) 00:09:36.088 9376.689 - 9427.102: 95.5764% ( 27) 00:09:36.088 9427.102 - 9477.514: 95.7005% ( 24) 00:09:36.088 9477.514 - 9527.926: 95.8247% ( 24) 00:09:36.088 9527.926 - 9578.338: 95.9437% ( 23) 00:09:36.088 9578.338 - 9628.751: 96.0679% ( 24) 00:09:36.088 9628.751 - 9679.163: 96.1714% ( 20) 00:09:36.088 9679.163 - 9729.575: 
96.2852% ( 22) 00:09:36.088 9729.575 - 9779.988: 96.3887% ( 20) 00:09:36.088 9779.988 - 9830.400: 96.4973% ( 21) 00:09:36.088 9830.400 - 9880.812: 96.6008% ( 20) 00:09:36.088 9880.812 - 9931.225: 96.6887% ( 17) 00:09:36.088 9931.225 - 9981.637: 96.7560% ( 13) 00:09:36.088 9981.637 - 10032.049: 96.8284% ( 14) 00:09:36.088 10032.049 - 10082.462: 96.8957% ( 13) 00:09:36.088 10082.462 - 10132.874: 96.9681% ( 14) 00:09:36.088 10132.874 - 10183.286: 97.0457% ( 15) 00:09:36.088 10183.286 - 10233.698: 97.1440% ( 19) 00:09:36.088 10233.698 - 10284.111: 97.2268% ( 16) 00:09:36.088 10284.111 - 10334.523: 97.3096% ( 16) 00:09:36.088 10334.523 - 10384.935: 97.3924% ( 16) 00:09:36.088 10384.935 - 10435.348: 97.4907% ( 19) 00:09:36.088 10435.348 - 10485.760: 97.5631% ( 14) 00:09:36.088 10485.760 - 10536.172: 97.6407% ( 15) 00:09:36.088 10536.172 - 10586.585: 97.7132% ( 14) 00:09:36.088 10586.585 - 10636.997: 97.7804% ( 13) 00:09:36.088 10636.997 - 10687.409: 97.8529% ( 14) 00:09:36.088 10687.409 - 10737.822: 97.9305% ( 15) 00:09:36.088 10737.822 - 10788.234: 98.0132% ( 16) 00:09:36.088 10788.234 - 10838.646: 98.0960% ( 16) 00:09:36.088 10838.646 - 10889.058: 98.1633% ( 13) 00:09:36.088 10889.058 - 10939.471: 98.2357% ( 14) 00:09:36.088 10939.471 - 10989.883: 98.3030% ( 13) 00:09:36.088 10989.883 - 11040.295: 98.3495% ( 9) 00:09:36.088 11040.295 - 11090.708: 98.4065% ( 11) 00:09:36.088 11090.708 - 11141.120: 98.4530% ( 9) 00:09:36.088 11141.120 - 11191.532: 98.5099% ( 11) 00:09:36.088 11191.532 - 11241.945: 98.5617% ( 10) 00:09:36.088 11241.945 - 11292.357: 98.6082% ( 9) 00:09:36.088 11292.357 - 11342.769: 98.6600% ( 10) 00:09:36.088 11342.769 - 11393.182: 98.7117% ( 10) 00:09:36.088 11393.182 - 11443.594: 98.7635% ( 10) 00:09:36.089 11443.594 - 11494.006: 98.7997% ( 7) 00:09:36.089 11494.006 - 11544.418: 98.8411% ( 8) 00:09:36.089 11544.418 - 11594.831: 98.8773% ( 7) 00:09:36.089 11594.831 - 11645.243: 98.9135% ( 7) 00:09:36.089 11645.243 - 11695.655: 98.9549% ( 8) 00:09:36.089 11695.655 - 11746.068: 98.9911% ( 7) 00:09:36.089 11746.068 - 11796.480: 99.0325% ( 8) 00:09:36.089 11796.480 - 11846.892: 99.0687% ( 7) 00:09:36.089 11846.892 - 11897.305: 99.1101% ( 8) 00:09:36.089 11897.305 - 11947.717: 99.1463% ( 7) 00:09:36.089 11947.717 - 11998.129: 99.1825% ( 7) 00:09:36.089 11998.129 - 12048.542: 99.2239% ( 8) 00:09:36.089 12048.542 - 12098.954: 99.2601% ( 7) 00:09:36.089 12098.954 - 12149.366: 99.2808% ( 4) 00:09:36.089 12149.366 - 12199.778: 99.2964% ( 3) 00:09:36.089 12199.778 - 12250.191: 99.3171% ( 4) 00:09:36.089 12250.191 - 12300.603: 99.3326% ( 3) 00:09:36.089 12300.603 - 12351.015: 99.3377% ( 1) 00:09:36.089 23290.486 - 23391.311: 99.3429% ( 1) 00:09:36.089 23391.311 - 23492.135: 99.3636% ( 4) 00:09:36.089 23492.135 - 23592.960: 99.3843% ( 4) 00:09:36.089 23592.960 - 23693.785: 99.4050% ( 4) 00:09:36.089 23693.785 - 23794.609: 99.4257% ( 4) 00:09:36.089 23794.609 - 23895.434: 99.4464% ( 4) 00:09:36.089 23895.434 - 23996.258: 99.4671% ( 4) 00:09:36.089 23996.258 - 24097.083: 99.4878% ( 4) 00:09:36.089 24097.083 - 24197.908: 99.5137% ( 5) 00:09:36.089 24197.908 - 24298.732: 99.5344% ( 4) 00:09:36.089 24298.732 - 24399.557: 99.5550% ( 4) 00:09:36.089 24399.557 - 24500.382: 99.5757% ( 4) 00:09:36.089 24500.382 - 24601.206: 99.5964% ( 4) 00:09:36.089 24601.206 - 24702.031: 99.6171% ( 4) 00:09:36.089 24702.031 - 24802.855: 99.6430% ( 5) 00:09:36.089 24802.855 - 24903.680: 99.6585% ( 3) 00:09:36.089 24903.680 - 25004.505: 99.6792% ( 4) 00:09:36.089 25004.505 - 25105.329: 99.6999% ( 4) 00:09:36.089 
25105.329 - 25206.154: 99.7258% ( 5) 00:09:36.089 25206.154 - 25306.978: 99.7465% ( 4) 00:09:36.089 25306.978 - 25407.803: 99.7672% ( 4) 00:09:36.089 25407.803 - 25508.628: 99.7879% ( 4) 00:09:36.089 25508.628 - 25609.452: 99.8137% ( 5) 00:09:36.089 25609.452 - 25710.277: 99.8344% ( 4) 00:09:36.089 25710.277 - 25811.102: 99.8551% ( 4) 00:09:36.089 25811.102 - 26012.751: 99.8965% ( 8) 00:09:36.089 26012.751 - 26214.400: 99.9431% ( 9) 00:09:36.089 26214.400 - 26416.049: 99.9845% ( 8) 00:09:36.089 26416.049 - 26617.698: 100.0000% ( 3) 00:09:36.089 00:09:36.089 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:36.089 ============================================================================== 00:09:36.089 Range in us Cumulative IO count 00:09:36.089 5167.262 - 5192.468: 0.0103% ( 2) 00:09:36.089 5192.468 - 5217.674: 0.0621% ( 10) 00:09:36.089 5217.674 - 5242.880: 0.1707% ( 21) 00:09:36.089 5242.880 - 5268.086: 0.5019% ( 64) 00:09:36.089 5268.086 - 5293.292: 0.8330% ( 64) 00:09:36.089 5293.292 - 5318.498: 1.2417% ( 79) 00:09:36.089 5318.498 - 5343.705: 1.6815% ( 85) 00:09:36.089 5343.705 - 5368.911: 2.4783% ( 154) 00:09:36.089 5368.911 - 5394.117: 3.3164% ( 162) 00:09:36.089 5394.117 - 5419.323: 4.2632% ( 183) 00:09:36.089 5419.323 - 5444.529: 5.2773% ( 196) 00:09:36.089 5444.529 - 5469.735: 6.1931% ( 177) 00:09:36.089 5469.735 - 5494.942: 7.1296% ( 181) 00:09:36.089 5494.942 - 5520.148: 7.9988% ( 168) 00:09:36.089 5520.148 - 5545.354: 9.0128% ( 196) 00:09:36.089 5545.354 - 5570.560: 10.0993% ( 210) 00:09:36.089 5570.560 - 5595.766: 11.2014% ( 213) 00:09:36.089 5595.766 - 5620.972: 12.4017% ( 232) 00:09:36.089 5620.972 - 5646.178: 13.6434% ( 240) 00:09:36.089 5646.178 - 5671.385: 14.8851% ( 240) 00:09:36.089 5671.385 - 5696.591: 16.1269% ( 240) 00:09:36.089 5696.591 - 5721.797: 17.3738% ( 241) 00:09:36.089 5721.797 - 5747.003: 18.6000% ( 237) 00:09:36.089 5747.003 - 5772.209: 19.8934% ( 250) 00:09:36.089 5772.209 - 5797.415: 21.1765% ( 248) 00:09:36.089 5797.415 - 5822.622: 22.5114% ( 258) 00:09:36.089 5822.622 - 5847.828: 23.8255% ( 254) 00:09:36.089 5847.828 - 5873.034: 25.1707% ( 260) 00:09:36.089 5873.034 - 5898.240: 26.4745% ( 252) 00:09:36.089 5898.240 - 5923.446: 27.7628% ( 249) 00:09:36.089 5923.446 - 5948.652: 29.0615% ( 251) 00:09:36.089 5948.652 - 5973.858: 30.3342% ( 246) 00:09:36.089 5973.858 - 5999.065: 31.6380% ( 252) 00:09:36.089 5999.065 - 6024.271: 32.9367% ( 251) 00:09:36.089 6024.271 - 6049.477: 34.2922% ( 262) 00:09:36.089 6049.477 - 6074.683: 35.5857% ( 250) 00:09:36.089 6074.683 - 6099.889: 36.8688% ( 248) 00:09:36.089 6099.889 - 6125.095: 38.1778% ( 253) 00:09:36.089 6125.095 - 6150.302: 39.4816% ( 252) 00:09:36.089 6150.302 - 6175.508: 40.7595% ( 247) 00:09:36.089 6175.508 - 6200.714: 42.0582% ( 251) 00:09:36.089 6200.714 - 6225.920: 43.3568% ( 251) 00:09:36.089 6225.920 - 6251.126: 44.6761% ( 255) 00:09:36.089 6251.126 - 6276.332: 46.0110% ( 258) 00:09:36.089 6276.332 - 6301.538: 47.3200% ( 253) 00:09:36.089 6301.538 - 6326.745: 48.6341% ( 254) 00:09:36.089 6326.745 - 6351.951: 49.9276% ( 250) 00:09:36.089 6351.951 - 6377.157: 51.2210% ( 250) 00:09:36.089 6377.157 - 6402.363: 52.5404% ( 255) 00:09:36.089 6402.363 - 6427.569: 53.8390% ( 251) 00:09:36.089 6427.569 - 6452.775: 55.1583% ( 255) 00:09:36.089 6452.775 - 6503.188: 57.7918% ( 509) 00:09:36.089 6503.188 - 6553.600: 60.4046% ( 505) 00:09:36.089 6553.600 - 6604.012: 63.0433% ( 510) 00:09:36.089 6604.012 - 6654.425: 65.6923% ( 512) 00:09:36.089 6654.425 - 6704.837: 68.3827% ( 520) 00:09:36.089 
6704.837 - 6755.249: 71.0058% ( 507) 00:09:36.089 6755.249 - 6805.662: 73.6807% ( 517) 00:09:36.089 6805.662 - 6856.074: 76.3659% ( 519) 00:09:36.089 6856.074 - 6906.486: 78.9580% ( 501) 00:09:36.089 6906.486 - 6956.898: 81.2966% ( 452) 00:09:36.089 6956.898 - 7007.311: 83.2988% ( 387) 00:09:36.089 7007.311 - 7057.723: 84.8924% ( 308) 00:09:36.089 7057.723 - 7108.135: 86.0927% ( 232) 00:09:36.089 7108.135 - 7158.548: 87.0706% ( 189) 00:09:36.089 7158.548 - 7208.960: 87.8984% ( 160) 00:09:36.089 7208.960 - 7259.372: 88.4934% ( 115) 00:09:36.089 7259.372 - 7309.785: 88.9538% ( 89) 00:09:36.089 7309.785 - 7360.197: 89.3005% ( 67) 00:09:36.089 7360.197 - 7410.609: 89.6109% ( 60) 00:09:36.089 7410.609 - 7461.022: 89.8438% ( 45) 00:09:36.089 7461.022 - 7511.434: 90.0352% ( 37) 00:09:36.089 7511.434 - 7561.846: 90.2007% ( 32) 00:09:36.089 7561.846 - 7612.258: 90.3508% ( 29) 00:09:36.089 7612.258 - 7662.671: 90.4957% ( 28) 00:09:36.089 7662.671 - 7713.083: 90.6405% ( 28) 00:09:36.089 7713.083 - 7763.495: 90.8164% ( 34) 00:09:36.089 7763.495 - 7813.908: 90.9975% ( 35) 00:09:36.089 7813.908 - 7864.320: 91.1683% ( 33) 00:09:36.089 7864.320 - 7914.732: 91.3235% ( 30) 00:09:36.089 7914.732 - 7965.145: 91.5097% ( 36) 00:09:36.089 7965.145 - 8015.557: 91.6753% ( 32) 00:09:36.089 8015.557 - 8065.969: 91.8409% ( 32) 00:09:36.089 8065.969 - 8116.382: 92.0012% ( 31) 00:09:36.089 8116.382 - 8166.794: 92.1513% ( 29) 00:09:36.089 8166.794 - 8217.206: 92.3117% ( 31) 00:09:36.089 8217.206 - 8267.618: 92.4824% ( 33) 00:09:36.089 8267.618 - 8318.031: 92.6635% ( 35) 00:09:36.089 8318.031 - 8368.443: 92.8394% ( 34) 00:09:36.089 8368.443 - 8418.855: 93.0153% ( 34) 00:09:36.089 8418.855 - 8469.268: 93.1964% ( 35) 00:09:36.089 8469.268 - 8519.680: 93.3516% ( 30) 00:09:36.089 8519.680 - 8570.092: 93.5224% ( 33) 00:09:36.089 8570.092 - 8620.505: 93.6879% ( 32) 00:09:36.089 8620.505 - 8670.917: 93.8535% ( 32) 00:09:36.089 8670.917 - 8721.329: 94.0087% ( 30) 00:09:36.089 8721.329 - 8771.742: 94.1743% ( 32) 00:09:36.089 8771.742 - 8822.154: 94.3398% ( 32) 00:09:36.089 8822.154 - 8872.566: 94.5002% ( 31) 00:09:36.089 8872.566 - 8922.978: 94.6502% ( 29) 00:09:36.089 8922.978 - 8973.391: 94.7951% ( 28) 00:09:36.089 8973.391 - 9023.803: 94.9348% ( 27) 00:09:36.089 9023.803 - 9074.215: 95.0642% ( 25) 00:09:36.089 9074.215 - 9124.628: 95.1883% ( 24) 00:09:36.089 9124.628 - 9175.040: 95.3022% ( 22) 00:09:36.089 9175.040 - 9225.452: 95.3953% ( 18) 00:09:36.089 9225.452 - 9275.865: 95.4832% ( 17) 00:09:36.089 9275.865 - 9326.277: 95.5608% ( 15) 00:09:36.089 9326.277 - 9376.689: 95.6385% ( 15) 00:09:36.089 9376.689 - 9427.102: 95.7005% ( 12) 00:09:36.089 9427.102 - 9477.514: 95.7730% ( 14) 00:09:36.090 9477.514 - 9527.926: 95.8454% ( 14) 00:09:36.090 9527.926 - 9578.338: 95.9023% ( 11) 00:09:36.090 9578.338 - 9628.751: 95.9592% ( 11) 00:09:36.090 9628.751 - 9679.163: 96.0213% ( 12) 00:09:36.090 9679.163 - 9729.575: 96.0782% ( 11) 00:09:36.090 9729.575 - 9779.988: 96.1455% ( 13) 00:09:36.090 9779.988 - 9830.400: 96.1972% ( 10) 00:09:36.090 9830.400 - 9880.812: 96.2593% ( 12) 00:09:36.090 9880.812 - 9931.225: 96.3214% ( 12) 00:09:36.090 9931.225 - 9981.637: 96.3835% ( 12) 00:09:36.090 9981.637 - 10032.049: 96.4197% ( 7) 00:09:36.090 10032.049 - 10082.462: 96.4818% ( 12) 00:09:36.090 10082.462 - 10132.874: 96.5490% ( 13) 00:09:36.090 10132.874 - 10183.286: 96.6111% ( 12) 00:09:36.090 10183.286 - 10233.698: 96.6732% ( 12) 00:09:36.090 10233.698 - 10284.111: 96.7508% ( 15) 00:09:36.090 10284.111 - 10334.523: 96.8284% ( 15) 00:09:36.090 
10334.523 - 10384.935: 96.9423% ( 22) 00:09:36.090 10384.935 - 10435.348: 97.0406% ( 19) 00:09:36.090 10435.348 - 10485.760: 97.1492% ( 21) 00:09:36.090 10485.760 - 10536.172: 97.2579% ( 21) 00:09:36.090 10536.172 - 10586.585: 97.3562% ( 19) 00:09:36.090 10586.585 - 10636.997: 97.4545% ( 19) 00:09:36.090 10636.997 - 10687.409: 97.5631% ( 21) 00:09:36.090 10687.409 - 10737.822: 97.6511% ( 17) 00:09:36.090 10737.822 - 10788.234: 97.7442% ( 18) 00:09:36.090 10788.234 - 10838.646: 97.8373% ( 18) 00:09:36.090 10838.646 - 10889.058: 97.9253% ( 17) 00:09:36.090 10889.058 - 10939.471: 98.0132% ( 17) 00:09:36.090 10939.471 - 10989.883: 98.1064% ( 18) 00:09:36.090 10989.883 - 11040.295: 98.1995% ( 18) 00:09:36.090 11040.295 - 11090.708: 98.2875% ( 17) 00:09:36.090 11090.708 - 11141.120: 98.3806% ( 18) 00:09:36.090 11141.120 - 11191.532: 98.4685% ( 17) 00:09:36.090 11191.532 - 11241.945: 98.5462% ( 15) 00:09:36.090 11241.945 - 11292.357: 98.6186% ( 14) 00:09:36.090 11292.357 - 11342.769: 98.6755% ( 11) 00:09:36.090 11342.769 - 11393.182: 98.7376% ( 12) 00:09:36.090 11393.182 - 11443.594: 98.7997% ( 12) 00:09:36.090 11443.594 - 11494.006: 98.8618% ( 12) 00:09:36.090 11494.006 - 11544.418: 98.9238% ( 12) 00:09:36.090 11544.418 - 11594.831: 98.9808% ( 11) 00:09:36.090 11594.831 - 11645.243: 99.0377% ( 11) 00:09:36.090 11645.243 - 11695.655: 99.0998% ( 12) 00:09:36.090 11695.655 - 11746.068: 99.1463% ( 9) 00:09:36.090 11746.068 - 11796.480: 99.1825% ( 7) 00:09:36.090 11796.480 - 11846.892: 99.2239% ( 8) 00:09:36.090 11846.892 - 11897.305: 99.2653% ( 8) 00:09:36.090 11897.305 - 11947.717: 99.3067% ( 8) 00:09:36.090 11947.717 - 11998.129: 99.3274% ( 4) 00:09:36.090 11998.129 - 12048.542: 99.3377% ( 2) 00:09:36.090 22080.591 - 22181.415: 99.3584% ( 4) 00:09:36.090 22181.415 - 22282.240: 99.3791% ( 4) 00:09:36.090 22282.240 - 22383.065: 99.3998% ( 4) 00:09:36.090 22383.065 - 22483.889: 99.4205% ( 4) 00:09:36.090 22483.889 - 22584.714: 99.4412% ( 4) 00:09:36.090 22584.714 - 22685.538: 99.4619% ( 4) 00:09:36.090 22685.538 - 22786.363: 99.4826% ( 4) 00:09:36.090 22786.363 - 22887.188: 99.5085% ( 5) 00:09:36.090 22887.188 - 22988.012: 99.5292% ( 4) 00:09:36.090 22988.012 - 23088.837: 99.5499% ( 4) 00:09:36.090 23088.837 - 23189.662: 99.5706% ( 4) 00:09:36.090 23189.662 - 23290.486: 99.5913% ( 4) 00:09:36.090 23290.486 - 23391.311: 99.6171% ( 5) 00:09:36.090 23391.311 - 23492.135: 99.6378% ( 4) 00:09:36.090 23492.135 - 23592.960: 99.6585% ( 4) 00:09:36.090 23592.960 - 23693.785: 99.6792% ( 4) 00:09:36.090 23693.785 - 23794.609: 99.6999% ( 4) 00:09:36.090 23794.609 - 23895.434: 99.7206% ( 4) 00:09:36.090 23895.434 - 23996.258: 99.7413% ( 4) 00:09:36.090 23996.258 - 24097.083: 99.7620% ( 4) 00:09:36.090 24097.083 - 24197.908: 99.7827% ( 4) 00:09:36.090 24197.908 - 24298.732: 99.8034% ( 4) 00:09:36.090 24298.732 - 24399.557: 99.8241% ( 4) 00:09:36.090 24399.557 - 24500.382: 99.8500% ( 5) 00:09:36.090 24500.382 - 24601.206: 99.8707% ( 4) 00:09:36.090 24601.206 - 24702.031: 99.8913% ( 4) 00:09:36.090 24702.031 - 24802.855: 99.9120% ( 4) 00:09:36.090 24802.855 - 24903.680: 99.9327% ( 4) 00:09:36.090 24903.680 - 25004.505: 99.9586% ( 5) 00:09:36.090 25004.505 - 25105.329: 99.9793% ( 4) 00:09:36.090 25105.329 - 25206.154: 100.0000% ( 4) 00:09:36.090 00:09:36.090 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:36.090 ============================================================================== 00:09:36.090 Range in us Cumulative IO count 00:09:36.090 5116.849 - 5142.055: 0.0051% ( 1) 00:09:36.090 
5142.055 - 5167.262: 0.0154% ( 2) 00:09:36.090 5167.262 - 5192.468: 0.0257% ( 2) 00:09:36.090 5192.468 - 5217.674: 0.0514% ( 5) 00:09:36.090 5217.674 - 5242.880: 0.1593% ( 21) 00:09:36.090 5242.880 - 5268.086: 0.3958% ( 46) 00:09:36.090 5268.086 - 5293.292: 0.7144% ( 62) 00:09:36.090 5293.292 - 5318.498: 1.1410% ( 83) 00:09:36.090 5318.498 - 5343.705: 1.8812% ( 144) 00:09:36.090 5343.705 - 5368.911: 2.7292% ( 165) 00:09:36.090 5368.911 - 5394.117: 3.5310% ( 156) 00:09:36.090 5394.117 - 5419.323: 4.2917% ( 148) 00:09:36.090 5419.323 - 5444.529: 5.0833% ( 154) 00:09:36.090 5444.529 - 5469.735: 5.8851% ( 156) 00:09:36.090 5469.735 - 5494.942: 6.8308% ( 184) 00:09:36.090 5494.942 - 5520.148: 7.8690% ( 202) 00:09:36.090 5520.148 - 5545.354: 8.9484% ( 210) 00:09:36.090 5545.354 - 5570.560: 10.1717% ( 238) 00:09:36.090 5570.560 - 5595.766: 11.3538% ( 230) 00:09:36.090 5595.766 - 5620.972: 12.5617% ( 235) 00:09:36.090 5620.972 - 5646.178: 13.7747% ( 236) 00:09:36.090 5646.178 - 5671.385: 14.9928% ( 237) 00:09:36.090 5671.385 - 5696.591: 16.2264% ( 240) 00:09:36.090 5696.591 - 5721.797: 17.4548% ( 239) 00:09:36.090 5721.797 - 5747.003: 18.6935% ( 241) 00:09:36.090 5747.003 - 5772.209: 19.9579% ( 246) 00:09:36.090 5772.209 - 5797.415: 21.1965% ( 241) 00:09:36.090 5797.415 - 5822.622: 22.4609% ( 246) 00:09:36.090 5822.622 - 5847.828: 23.7305% ( 247) 00:09:36.090 5847.828 - 5873.034: 24.9846% ( 244) 00:09:36.090 5873.034 - 5898.240: 26.2644% ( 249) 00:09:36.090 5898.240 - 5923.446: 27.5545% ( 251) 00:09:36.090 5923.446 - 5948.652: 28.8549% ( 253) 00:09:36.090 5948.652 - 5973.858: 30.1552% ( 253) 00:09:36.090 5973.858 - 5999.065: 31.4299% ( 248) 00:09:36.090 5999.065 - 6024.271: 32.7354% ( 254) 00:09:36.090 6024.271 - 6049.477: 34.0461% ( 255) 00:09:36.090 6049.477 - 6074.683: 35.3618% ( 256) 00:09:36.090 6074.683 - 6099.889: 36.6519% ( 251) 00:09:36.090 6099.889 - 6125.095: 37.9986% ( 262) 00:09:36.090 6125.095 - 6150.302: 39.2887% ( 251) 00:09:36.090 6150.302 - 6175.508: 40.5993% ( 255) 00:09:36.090 6175.508 - 6200.714: 41.9151% ( 256) 00:09:36.090 6200.714 - 6225.920: 43.2257% ( 255) 00:09:36.090 6225.920 - 6251.126: 44.5312% ( 254) 00:09:36.090 6251.126 - 6276.332: 45.8111% ( 249) 00:09:36.090 6276.332 - 6301.538: 47.1217% ( 255) 00:09:36.090 6301.538 - 6326.745: 48.4581% ( 260) 00:09:36.090 6326.745 - 6351.951: 49.7841% ( 258) 00:09:36.090 6351.951 - 6377.157: 51.0948% ( 255) 00:09:36.090 6377.157 - 6402.363: 52.4003% ( 254) 00:09:36.090 6402.363 - 6427.569: 53.7161% ( 256) 00:09:36.090 6427.569 - 6452.775: 55.0524% ( 260) 00:09:36.090 6452.775 - 6503.188: 57.6326% ( 502) 00:09:36.090 6503.188 - 6553.600: 60.2693% ( 513) 00:09:36.090 6553.600 - 6604.012: 62.9266% ( 517) 00:09:36.090 6604.012 - 6654.425: 65.5993% ( 520) 00:09:36.090 6654.425 - 6704.837: 68.2720% ( 520) 00:09:36.090 6704.837 - 6755.249: 70.9601% ( 523) 00:09:36.090 6755.249 - 6805.662: 73.6431% ( 522) 00:09:36.090 6805.662 - 6856.074: 76.3415% ( 525) 00:09:36.090 6856.074 - 6906.486: 78.9628% ( 510) 00:09:36.090 6906.486 - 6956.898: 81.4402% ( 482) 00:09:36.090 6956.898 - 7007.311: 83.4807% ( 397) 00:09:36.090 7007.311 - 7057.723: 85.0894% ( 313) 00:09:36.090 7057.723 - 7108.135: 86.4463% ( 264) 00:09:36.090 7108.135 - 7158.548: 87.5308% ( 211) 00:09:36.090 7158.548 - 7208.960: 88.3686% ( 163) 00:09:36.090 7208.960 - 7259.372: 88.9494% ( 113) 00:09:36.090 7259.372 - 7309.785: 89.3863% ( 85) 00:09:36.090 7309.785 - 7360.197: 89.7512% ( 71) 00:09:36.090 7360.197 - 7410.609: 90.0391% ( 56) 00:09:36.090 7410.609 - 7461.022: 90.3166% ( 
54) 00:09:36.090 7461.022 - 7511.434: 90.5376% ( 43) 00:09:36.090 7511.434 - 7561.846: 90.7381% ( 39) 00:09:36.090 7561.846 - 7612.258: 90.9077% ( 33) 00:09:36.090 7612.258 - 7662.671: 91.0876% ( 35) 00:09:36.090 7662.671 - 7713.083: 91.2366% ( 29) 00:09:36.090 7713.083 - 7763.495: 91.3857% ( 29) 00:09:36.090 7763.495 - 7813.908: 91.5399% ( 30) 00:09:36.090 7813.908 - 7864.320: 91.6632% ( 24) 00:09:36.090 7864.320 - 7914.732: 91.8072% ( 28) 00:09:36.090 7914.732 - 7965.145: 91.9459% ( 27) 00:09:36.090 7965.145 - 8015.557: 92.0796% ( 26) 00:09:36.090 8015.557 - 8065.969: 92.1978% ( 23) 00:09:36.090 8065.969 - 8116.382: 92.3571% ( 31) 00:09:36.090 8116.382 - 8166.794: 92.4805% ( 24) 00:09:36.090 8166.794 - 8217.206: 92.6295% ( 29) 00:09:36.090 8217.206 - 8267.618: 92.7786% ( 29) 00:09:36.090 8267.618 - 8318.031: 92.9122% ( 26) 00:09:36.090 8318.031 - 8368.443: 93.0561% ( 28) 00:09:36.090 8368.443 - 8418.855: 93.1949% ( 27) 00:09:36.090 8418.855 - 8469.268: 93.3131% ( 23) 00:09:36.090 8469.268 - 8519.680: 93.4262% ( 22) 00:09:36.090 8519.680 - 8570.092: 93.5701% ( 28) 00:09:36.090 8570.092 - 8620.505: 93.6729% ( 20) 00:09:36.090 8620.505 - 8670.917: 93.7963% ( 24) 00:09:36.090 8670.917 - 8721.329: 93.9093% ( 22) 00:09:36.090 8721.329 - 8771.742: 94.0121% ( 20) 00:09:36.090 8771.742 - 8822.154: 94.1509% ( 27) 00:09:36.090 8822.154 - 8872.566: 94.2640% ( 22) 00:09:36.090 8872.566 - 8922.978: 94.3719% ( 21) 00:09:36.090 8922.978 - 8973.391: 94.4696% ( 19) 00:09:36.090 8973.391 - 9023.803: 94.5569% ( 17) 00:09:36.090 9023.803 - 9074.215: 94.6546% ( 19) 00:09:36.090 9074.215 - 9124.628: 94.7368% ( 16) 00:09:36.090 9124.628 - 9175.040: 94.8294% ( 18) 00:09:36.090 9175.040 - 9225.452: 94.9219% ( 18) 00:09:36.090 9225.452 - 9275.865: 95.0195% ( 19) 00:09:36.090 9275.865 - 9326.277: 95.1275% ( 21) 00:09:36.091 9326.277 - 9376.689: 95.2251% ( 19) 00:09:36.091 9376.689 - 9427.102: 95.3331% ( 21) 00:09:36.091 9427.102 - 9477.514: 95.4461% ( 22) 00:09:36.091 9477.514 - 9527.926: 95.5541% ( 21) 00:09:36.091 9527.926 - 9578.338: 95.6620% ( 21) 00:09:36.091 9578.338 - 9628.751: 95.7854% ( 24) 00:09:36.091 9628.751 - 9679.163: 95.8984% ( 22) 00:09:36.091 9679.163 - 9729.575: 96.0115% ( 22) 00:09:36.091 9729.575 - 9779.988: 96.1246% ( 22) 00:09:36.091 9779.988 - 9830.400: 96.2479% ( 24) 00:09:36.091 9830.400 - 9880.812: 96.3456% ( 19) 00:09:36.091 9880.812 - 9931.225: 96.4535% ( 21) 00:09:36.091 9931.225 - 9981.637: 96.5512% ( 19) 00:09:36.091 9981.637 - 10032.049: 96.6437% ( 18) 00:09:36.091 10032.049 - 10082.462: 96.7054% ( 12) 00:09:36.091 10082.462 - 10132.874: 96.7979% ( 18) 00:09:36.091 10132.874 - 10183.286: 96.8801% ( 16) 00:09:36.091 10183.286 - 10233.698: 96.9624% ( 16) 00:09:36.091 10233.698 - 10284.111: 97.0292% ( 13) 00:09:36.091 10284.111 - 10334.523: 97.0909% ( 12) 00:09:36.091 10334.523 - 10384.935: 97.1628% ( 14) 00:09:36.091 10384.935 - 10435.348: 97.2348% ( 14) 00:09:36.091 10435.348 - 10485.760: 97.3016% ( 13) 00:09:36.091 10485.760 - 10536.172: 97.3736% ( 14) 00:09:36.091 10536.172 - 10586.585: 97.4404% ( 13) 00:09:36.091 10586.585 - 10636.997: 97.4866% ( 9) 00:09:36.091 10636.997 - 10687.409: 97.5278% ( 8) 00:09:36.091 10687.409 - 10737.822: 97.5586% ( 6) 00:09:36.091 10737.822 - 10788.234: 97.5894% ( 6) 00:09:36.091 10788.234 - 10838.646: 97.6254% ( 7) 00:09:36.091 10838.646 - 10889.058: 97.6717% ( 9) 00:09:36.091 10889.058 - 10939.471: 97.7282% ( 11) 00:09:36.091 10939.471 - 10989.883: 97.7796% ( 10) 00:09:36.091 10989.883 - 11040.295: 97.8361% ( 11) 00:09:36.091 11040.295 - 11090.708: 
97.8824% ( 9) 00:09:36.091 11090.708 - 11141.120: 97.9338% ( 10) 00:09:36.091 11141.120 - 11191.532: 97.9698% ( 7) 00:09:36.091 11191.532 - 11241.945: 98.0160% ( 9) 00:09:36.091 11241.945 - 11292.357: 98.0777% ( 12) 00:09:36.091 11292.357 - 11342.769: 98.1291% ( 10) 00:09:36.091 11342.769 - 11393.182: 98.1754% ( 9) 00:09:36.091 11393.182 - 11443.594: 98.2370% ( 12) 00:09:36.091 11443.594 - 11494.006: 98.2936% ( 11) 00:09:36.091 11494.006 - 11544.418: 98.3450% ( 10) 00:09:36.091 11544.418 - 11594.831: 98.3912% ( 9) 00:09:36.091 11594.831 - 11645.243: 98.4478% ( 11) 00:09:36.091 11645.243 - 11695.655: 98.4940% ( 9) 00:09:36.091 11695.655 - 11746.068: 98.5403% ( 9) 00:09:36.091 11746.068 - 11796.480: 98.5968% ( 11) 00:09:36.091 11796.480 - 11846.892: 98.6534% ( 11) 00:09:36.091 11846.892 - 11897.305: 98.7048% ( 10) 00:09:36.091 11897.305 - 11947.717: 98.7459% ( 8) 00:09:36.091 11947.717 - 11998.129: 98.7870% ( 8) 00:09:36.091 11998.129 - 12048.542: 98.8281% ( 8) 00:09:36.091 12048.542 - 12098.954: 98.8692% ( 8) 00:09:36.091 12098.954 - 12149.366: 98.9052% ( 7) 00:09:36.091 12149.366 - 12199.778: 98.9463% ( 8) 00:09:36.091 12199.778 - 12250.191: 98.9823% ( 7) 00:09:36.091 12250.191 - 12300.603: 99.0183% ( 7) 00:09:36.091 12300.603 - 12351.015: 99.0594% ( 8) 00:09:36.091 12351.015 - 12401.428: 99.0903% ( 6) 00:09:36.091 12401.428 - 12451.840: 99.1262% ( 7) 00:09:36.091 12451.840 - 12502.252: 99.1674% ( 8) 00:09:36.091 12502.252 - 12552.665: 99.2033% ( 7) 00:09:36.091 12552.665 - 12603.077: 99.2239% ( 4) 00:09:36.091 12603.077 - 12653.489: 99.2444% ( 4) 00:09:36.091 12653.489 - 12703.902: 99.2650% ( 4) 00:09:36.091 12703.902 - 12754.314: 99.2856% ( 4) 00:09:36.091 12754.314 - 12804.726: 99.3061% ( 4) 00:09:36.091 12804.726 - 12855.138: 99.3267% ( 4) 00:09:36.091 12855.138 - 12905.551: 99.3421% ( 3) 00:09:36.091 13913.797 - 14014.622: 99.3472% ( 1) 00:09:36.091 14014.622 - 14115.446: 99.3729% ( 5) 00:09:36.091 14115.446 - 14216.271: 99.3935% ( 4) 00:09:36.091 14216.271 - 14317.095: 99.4141% ( 4) 00:09:36.091 14317.095 - 14417.920: 99.4346% ( 4) 00:09:36.091 14417.920 - 14518.745: 99.4552% ( 4) 00:09:36.091 14518.745 - 14619.569: 99.4809% ( 5) 00:09:36.091 14619.569 - 14720.394: 99.5014% ( 4) 00:09:36.091 14720.394 - 14821.218: 99.5220% ( 4) 00:09:36.091 14821.218 - 14922.043: 99.5426% ( 4) 00:09:36.091 14922.043 - 15022.868: 99.5631% ( 4) 00:09:36.091 15022.868 - 15123.692: 99.5837% ( 4) 00:09:36.091 15123.692 - 15224.517: 99.6042% ( 4) 00:09:36.091 15224.517 - 15325.342: 99.6248% ( 4) 00:09:36.091 15325.342 - 15426.166: 99.6454% ( 4) 00:09:36.091 15426.166 - 15526.991: 99.6659% ( 4) 00:09:36.091 15526.991 - 15627.815: 99.6916% ( 5) 00:09:36.091 15627.815 - 15728.640: 99.7122% ( 4) 00:09:36.091 15728.640 - 15829.465: 99.7327% ( 4) 00:09:36.091 15829.465 - 15930.289: 99.7533% ( 4) 00:09:36.091 15930.289 - 16031.114: 99.7738% ( 4) 00:09:36.091 16031.114 - 16131.938: 99.7944% ( 4) 00:09:36.091 16131.938 - 16232.763: 99.8150% ( 4) 00:09:36.091 16232.763 - 16333.588: 99.8355% ( 4) 00:09:36.091 16333.588 - 16434.412: 99.8561% ( 4) 00:09:36.091 16434.412 - 16535.237: 99.8818% ( 5) 00:09:36.091 16535.237 - 16636.062: 99.9023% ( 4) 00:09:36.091 16636.062 - 16736.886: 99.9229% ( 4) 00:09:36.091 16736.886 - 16837.711: 99.9486% ( 5) 00:09:36.091 16837.711 - 16938.535: 99.9692% ( 4) 00:09:36.091 16938.535 - 17039.360: 99.9897% ( 4) 00:09:36.091 17039.360 - 17140.185: 100.0000% ( 2) 00:09:36.091 00:09:36.091 20:02:43 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write 
-o 12288 -t 1 -LL -i 0 00:09:37.469 Initializing NVMe Controllers 00:09:37.469 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:37.469 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:37.469 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:37.469 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:37.469 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:37.469 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:37.469 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:37.469 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:37.469 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:37.469 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:37.469 Initialization complete. Launching workers. 00:09:37.469 ======================================================== 00:09:37.469 Latency(us) 00:09:37.469 Device Information : IOPS MiB/s Average min max 00:09:37.469 PCIE (0000:00:09.0) NSID 1 from core 0: 19356.93 226.84 6610.53 5165.63 27634.80 00:09:37.469 PCIE (0000:00:06.0) NSID 1 from core 0: 19356.93 226.84 6605.37 4934.82 26810.69 00:09:37.469 PCIE (0000:00:07.0) NSID 1 from core 0: 19356.93 226.84 6599.85 5221.35 25432.17 00:09:37.469 PCIE (0000:00:08.0) NSID 1 from core 0: 19356.93 226.84 6594.57 4921.70 24247.32 00:09:37.469 PCIE (0000:00:08.0) NSID 2 from core 0: 19356.93 226.84 6589.32 5149.02 23020.44 00:09:37.469 PCIE (0000:00:08.0) NSID 3 from core 0: 19484.28 228.33 6541.17 5239.93 15288.12 00:09:37.469 ======================================================== 00:09:37.469 Total : 116268.94 1362.53 6590.08 4921.70 27634.80 00:09:37.469 00:09:37.469 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:37.469 ================================================================================= 00:09:37.469 1.00000% : 5646.178us 00:09:37.469 10.00000% : 5973.858us 00:09:37.469 25.00000% : 6175.508us 00:09:37.469 50.00000% : 6402.363us 00:09:37.469 75.00000% : 6755.249us 00:09:37.469 90.00000% : 7057.723us 00:09:37.469 95.00000% : 7360.197us 00:09:37.469 98.00000% : 8519.680us 00:09:37.469 99.00000% : 10032.049us 00:09:37.469 99.50000% : 25508.628us 00:09:37.469 99.90000% : 27222.646us 00:09:37.469 99.99000% : 27625.945us 00:09:37.469 99.99900% : 27827.594us 00:09:37.469 99.99990% : 27827.594us 00:09:37.469 99.99999% : 27827.594us 00:09:37.469 00:09:37.469 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:37.469 ================================================================================= 00:09:37.469 1.00000% : 5343.705us 00:09:37.469 10.00000% : 5772.209us 00:09:37.469 25.00000% : 5999.065us 00:09:37.469 50.00000% : 6427.569us 00:09:37.469 75.00000% : 6906.486us 00:09:37.469 90.00000% : 7309.785us 00:09:37.469 95.00000% : 7662.671us 00:09:37.469 98.00000% : 8822.154us 00:09:37.469 99.00000% : 9830.400us 00:09:37.469 99.50000% : 24097.083us 00:09:37.469 99.90000% : 26416.049us 00:09:37.469 99.99000% : 26819.348us 00:09:37.469 99.99900% : 26819.348us 00:09:37.469 99.99990% : 26819.348us 00:09:37.469 99.99999% : 26819.348us 00:09:37.469 00:09:37.469 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:37.469 ================================================================================= 00:09:37.469 1.00000% : 5721.797us 00:09:37.469 10.00000% : 5973.858us 00:09:37.469 25.00000% : 6150.302us 00:09:37.469 50.00000% : 6377.157us 00:09:37.469 75.00000% : 6755.249us 00:09:37.469 90.00000% : 7108.135us 00:09:37.469 95.00000% : 7461.022us 
00:09:37.469 98.00000% : 8267.618us 00:09:37.469 99.00000% : 10435.348us 00:09:37.469 99.50000% : 22988.012us 00:09:37.469 99.90000% : 25004.505us 00:09:37.469 99.99000% : 25407.803us 00:09:37.469 99.99900% : 25508.628us 00:09:37.469 99.99990% : 25508.628us 00:09:37.469 99.99999% : 25508.628us 00:09:37.469 00:09:37.469 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:37.469 ================================================================================= 00:09:37.469 1.00000% : 5570.560us 00:09:37.469 10.00000% : 5948.652us 00:09:37.469 25.00000% : 6150.302us 00:09:37.469 50.00000% : 6402.363us 00:09:37.469 75.00000% : 6755.249us 00:09:37.469 90.00000% : 7057.723us 00:09:37.469 95.00000% : 7410.609us 00:09:37.469 98.00000% : 8469.268us 00:09:37.469 99.00000% : 12048.542us 00:09:37.469 99.50000% : 21778.117us 00:09:37.469 99.90000% : 23794.609us 00:09:37.469 99.99000% : 24298.732us 00:09:37.469 99.99900% : 24298.732us 00:09:37.469 99.99990% : 24298.732us 00:09:37.469 99.99999% : 24298.732us 00:09:37.469 00:09:37.469 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:37.469 ================================================================================= 00:09:37.469 1.00000% : 5671.385us 00:09:37.469 10.00000% : 5973.858us 00:09:37.469 25.00000% : 6175.508us 00:09:37.469 50.00000% : 6402.363us 00:09:37.469 75.00000% : 6755.249us 00:09:37.469 90.00000% : 7057.723us 00:09:37.469 95.00000% : 7360.197us 00:09:37.469 98.00000% : 8519.680us 00:09:37.469 99.00000% : 12149.366us 00:09:37.469 99.50000% : 20568.222us 00:09:37.469 99.90000% : 22584.714us 00:09:37.469 99.99000% : 23088.837us 00:09:37.469 99.99900% : 23088.837us 00:09:37.469 99.99990% : 23088.837us 00:09:37.469 99.99999% : 23088.837us 00:09:37.469 00:09:37.469 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:37.469 ================================================================================= 00:09:37.469 1.00000% : 5671.385us 00:09:37.469 10.00000% : 5973.858us 00:09:37.469 25.00000% : 6175.508us 00:09:37.469 50.00000% : 6402.363us 00:09:37.469 75.00000% : 6755.249us 00:09:37.469 90.00000% : 7108.135us 00:09:37.469 95.00000% : 7309.785us 00:09:37.469 98.00000% : 8721.329us 00:09:37.469 99.00000% : 11292.357us 00:09:37.469 99.50000% : 13006.375us 00:09:37.469 99.90000% : 14922.043us 00:09:37.469 99.99000% : 15325.342us 00:09:37.469 99.99900% : 15325.342us 00:09:37.469 99.99990% : 15325.342us 00:09:37.469 99.99999% : 15325.342us 00:09:37.469 00:09:37.469 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:37.469 ============================================================================== 00:09:37.469 Range in us Cumulative IO count 00:09:37.469 5142.055 - 5167.262: 0.0051% ( 1) 00:09:37.469 5192.468 - 5217.674: 0.0154% ( 2) 00:09:37.469 5217.674 - 5242.880: 0.0257% ( 2) 00:09:37.469 5242.880 - 5268.086: 0.0360% ( 2) 00:09:37.469 5268.086 - 5293.292: 0.0514% ( 3) 00:09:37.469 5293.292 - 5318.498: 0.0668% ( 3) 00:09:37.469 5318.498 - 5343.705: 0.0720% ( 1) 00:09:37.469 5343.705 - 5368.911: 0.1028% ( 6) 00:09:37.469 5368.911 - 5394.117: 0.1439% ( 8) 00:09:37.469 5394.117 - 5419.323: 0.1953% ( 10) 00:09:37.469 5419.323 - 5444.529: 0.2313% ( 7) 00:09:37.469 5444.529 - 5469.735: 0.2673% ( 7) 00:09:37.469 5469.735 - 5494.942: 0.3135% ( 9) 00:09:37.469 5494.942 - 5520.148: 0.3752% ( 12) 00:09:37.469 5520.148 - 5545.354: 0.4626% ( 17) 00:09:37.469 5545.354 - 5570.560: 0.5397% ( 15) 00:09:37.469 5570.560 - 5595.766: 0.6373% ( 19) 00:09:37.469 5595.766 - 5620.972: 
0.8224% ( 36) 00:09:37.469 5620.972 - 5646.178: 1.0177% ( 38) 00:09:37.469 5646.178 - 5671.385: 1.2541% ( 46) 00:09:37.469 5671.385 - 5696.591: 1.5471% ( 57) 00:09:37.469 5696.591 - 5721.797: 1.8709% ( 63) 00:09:37.469 5721.797 - 5747.003: 2.2204% ( 68) 00:09:37.469 5747.003 - 5772.209: 2.6881% ( 91) 00:09:37.469 5772.209 - 5797.415: 3.2278% ( 105) 00:09:37.469 5797.415 - 5822.622: 3.9371% ( 138) 00:09:37.469 5822.622 - 5847.828: 4.7954% ( 167) 00:09:37.469 5847.828 - 5873.034: 5.7874% ( 193) 00:09:37.469 5873.034 - 5898.240: 7.1289% ( 261) 00:09:37.469 5898.240 - 5923.446: 8.4190% ( 251) 00:09:37.469 5923.446 - 5948.652: 9.8890% ( 286) 00:09:37.469 5948.652 - 5973.858: 11.4618% ( 306) 00:09:37.469 5973.858 - 5999.065: 13.0500% ( 309) 00:09:37.470 5999.065 - 6024.271: 14.7153% ( 324) 00:09:37.470 6024.271 - 6049.477: 16.4114% ( 330) 00:09:37.470 6049.477 - 6074.683: 18.4313% ( 393) 00:09:37.470 6074.683 - 6099.889: 20.3947% ( 382) 00:09:37.470 6099.889 - 6125.095: 22.3273% ( 376) 00:09:37.470 6125.095 - 6150.302: 24.4500% ( 413) 00:09:37.470 6150.302 - 6175.508: 26.5985% ( 418) 00:09:37.470 6175.508 - 6200.714: 28.9165% ( 451) 00:09:37.470 6200.714 - 6225.920: 31.7331% ( 548) 00:09:37.470 6225.920 - 6251.126: 34.7862% ( 594) 00:09:37.470 6251.126 - 6276.332: 37.9112% ( 608) 00:09:37.470 6276.332 - 6301.538: 40.7586% ( 554) 00:09:37.470 6301.538 - 6326.745: 43.5907% ( 551) 00:09:37.470 6326.745 - 6351.951: 46.3816% ( 543) 00:09:37.470 6351.951 - 6377.157: 48.6123% ( 434) 00:09:37.470 6377.157 - 6402.363: 51.0074% ( 466) 00:09:37.470 6402.363 - 6427.569: 53.4539% ( 476) 00:09:37.470 6427.569 - 6452.775: 55.9776% ( 491) 00:09:37.470 6452.775 - 6503.188: 59.5292% ( 691) 00:09:37.470 6503.188 - 6553.600: 63.6873% ( 809) 00:09:37.470 6553.600 - 6604.012: 67.2132% ( 686) 00:09:37.470 6604.012 - 6654.425: 70.3125% ( 603) 00:09:37.470 6654.425 - 6704.837: 73.9566% ( 709) 00:09:37.470 6704.837 - 6755.249: 76.8041% ( 554) 00:09:37.470 6755.249 - 6805.662: 79.4202% ( 509) 00:09:37.470 6805.662 - 6856.074: 82.5504% ( 609) 00:09:37.470 6856.074 - 6906.486: 84.8787% ( 453) 00:09:37.470 6906.486 - 6956.898: 87.1248% ( 437) 00:09:37.470 6956.898 - 7007.311: 89.1139% ( 387) 00:09:37.470 7007.311 - 7057.723: 90.6918% ( 307) 00:09:37.470 7057.723 - 7108.135: 91.9768% ( 250) 00:09:37.470 7108.135 - 7158.548: 92.8762% ( 175) 00:09:37.470 7158.548 - 7208.960: 93.6678% ( 154) 00:09:37.470 7208.960 - 7259.372: 94.3000% ( 123) 00:09:37.470 7259.372 - 7309.785: 94.8345% ( 104) 00:09:37.470 7309.785 - 7360.197: 95.2971% ( 90) 00:09:37.470 7360.197 - 7410.609: 95.8162% ( 101) 00:09:37.470 7410.609 - 7461.022: 96.1760% ( 70) 00:09:37.470 7461.022 - 7511.434: 96.3867% ( 41) 00:09:37.470 7511.434 - 7561.846: 96.5923% ( 40) 00:09:37.470 7561.846 - 7612.258: 96.7671% ( 34) 00:09:37.470 7612.258 - 7662.671: 96.9161% ( 29) 00:09:37.470 7662.671 - 7713.083: 97.0395% ( 24) 00:09:37.470 7713.083 - 7763.495: 97.1371% ( 19) 00:09:37.470 7763.495 - 7813.908: 97.2245% ( 17) 00:09:37.470 7813.908 - 7864.320: 97.3067% ( 16) 00:09:37.470 7864.320 - 7914.732: 97.3941% ( 17) 00:09:37.470 7914.732 - 7965.145: 97.4661% ( 14) 00:09:37.470 7965.145 - 8015.557: 97.5226% ( 11) 00:09:37.470 8015.557 - 8065.969: 97.5689% ( 9) 00:09:37.470 8065.969 - 8116.382: 97.6254% ( 11) 00:09:37.470 8116.382 - 8166.794: 97.6665% ( 8) 00:09:37.470 8166.794 - 8217.206: 97.7179% ( 10) 00:09:37.470 8217.206 - 8267.618: 97.7642% ( 9) 00:09:37.470 8267.618 - 8318.031: 97.8053% ( 8) 00:09:37.470 8318.031 - 8368.443: 97.8567% ( 10) 00:09:37.470 8368.443 - 8418.855: 
97.9235% ( 13) 00:09:37.470 8418.855 - 8469.268: 97.9903% ( 13) 00:09:37.470 8469.268 - 8519.680: 98.0417% ( 10) 00:09:37.470 8519.680 - 8570.092: 98.2370% ( 38) 00:09:37.470 8570.092 - 8620.505: 98.2782% ( 8) 00:09:37.470 8620.505 - 8670.917: 98.2987% ( 4) 00:09:37.470 8670.917 - 8721.329: 98.3244% ( 5) 00:09:37.470 8721.329 - 8771.742: 98.3501% ( 5) 00:09:37.470 8771.742 - 8822.154: 98.3810% ( 6) 00:09:37.470 8822.154 - 8872.566: 98.4015% ( 4) 00:09:37.470 8872.566 - 8922.978: 98.4272% ( 5) 00:09:37.470 8922.978 - 8973.391: 98.4529% ( 5) 00:09:37.470 8973.391 - 9023.803: 98.4786% ( 5) 00:09:37.470 9023.803 - 9074.215: 98.5043% ( 5) 00:09:37.470 9074.215 - 9124.628: 98.5300% ( 5) 00:09:37.470 9124.628 - 9175.040: 98.5609% ( 6) 00:09:37.470 9175.040 - 9225.452: 98.6071% ( 9) 00:09:37.470 9225.452 - 9275.865: 98.6482% ( 8) 00:09:37.470 9275.865 - 9326.277: 98.6739% ( 5) 00:09:37.470 9326.277 - 9376.689: 98.7099% ( 7) 00:09:37.470 9376.689 - 9427.102: 98.7407% ( 6) 00:09:37.470 9427.102 - 9477.514: 98.7716% ( 6) 00:09:37.470 9477.514 - 9527.926: 98.7973% ( 5) 00:09:37.470 9527.926 - 9578.338: 98.8281% ( 6) 00:09:37.470 9578.338 - 9628.751: 98.8538% ( 5) 00:09:37.470 9628.751 - 9679.163: 98.8744% ( 4) 00:09:37.470 9679.163 - 9729.575: 98.8898% ( 3) 00:09:37.470 9729.575 - 9779.988: 98.9104% ( 4) 00:09:37.470 9779.988 - 9830.400: 98.9309% ( 4) 00:09:37.470 9830.400 - 9880.812: 98.9515% ( 4) 00:09:37.470 9880.812 - 9931.225: 98.9720% ( 4) 00:09:37.470 9931.225 - 9981.637: 98.9875% ( 3) 00:09:37.470 9981.637 - 10032.049: 99.0080% ( 4) 00:09:37.470 10032.049 - 10082.462: 99.0234% ( 3) 00:09:37.470 10082.462 - 10132.874: 99.0440% ( 4) 00:09:37.470 10132.874 - 10183.286: 99.0594% ( 3) 00:09:37.470 10183.286 - 10233.698: 99.0800% ( 4) 00:09:37.470 10233.698 - 10284.111: 99.0954% ( 3) 00:09:37.470 10284.111 - 10334.523: 99.1160% ( 4) 00:09:37.470 10334.523 - 10384.935: 99.1314% ( 3) 00:09:37.470 10384.935 - 10435.348: 99.1468% ( 3) 00:09:37.470 10435.348 - 10485.760: 99.1622% ( 3) 00:09:37.470 10485.760 - 10536.172: 99.1776% ( 3) 00:09:37.470 10536.172 - 10586.585: 99.1931% ( 3) 00:09:37.470 10586.585 - 10636.997: 99.2136% ( 4) 00:09:37.470 10636.997 - 10687.409: 99.2290% ( 3) 00:09:37.470 10687.409 - 10737.822: 99.2444% ( 3) 00:09:37.470 10737.822 - 10788.234: 99.2599% ( 3) 00:09:37.470 10788.234 - 10838.646: 99.2753% ( 3) 00:09:37.470 10838.646 - 10889.058: 99.2907% ( 3) 00:09:37.470 10889.058 - 10939.471: 99.3061% ( 3) 00:09:37.470 10939.471 - 10989.883: 99.3215% ( 3) 00:09:37.470 10989.883 - 11040.295: 99.3421% ( 4) 00:09:37.470 25105.329 - 25206.154: 99.3472% ( 1) 00:09:37.470 25306.978 - 25407.803: 99.4706% ( 24) 00:09:37.470 25407.803 - 25508.628: 99.5940% ( 24) 00:09:37.470 25508.628 - 25609.452: 99.6402% ( 9) 00:09:37.470 25609.452 - 25710.277: 99.6556% ( 3) 00:09:37.470 25710.277 - 25811.102: 99.6762% ( 4) 00:09:37.470 25811.102 - 26012.751: 99.7122% ( 7) 00:09:37.470 26012.751 - 26214.400: 99.7481% ( 7) 00:09:37.470 26214.400 - 26416.049: 99.7841% ( 7) 00:09:37.470 26416.049 - 26617.698: 99.8201% ( 7) 00:09:37.470 26617.698 - 26819.348: 99.8561% ( 7) 00:09:37.470 26819.348 - 27020.997: 99.8921% ( 7) 00:09:37.470 27020.997 - 27222.646: 99.9332% ( 8) 00:09:37.470 27222.646 - 27424.295: 99.9640% ( 6) 00:09:37.470 27424.295 - 27625.945: 99.9949% ( 6) 00:09:37.470 27625.945 - 27827.594: 100.0000% ( 1) 00:09:37.470 00:09:37.470 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:37.470 ============================================================================== 00:09:37.470 
Range in us Cumulative IO count 00:09:37.470 4915.200 - 4940.406: 0.0051% ( 1) 00:09:37.470 4940.406 - 4965.612: 0.0411% ( 7) 00:09:37.470 4965.612 - 4990.818: 0.0668% ( 5) 00:09:37.470 4990.818 - 5016.025: 0.0925% ( 5) 00:09:37.470 5016.025 - 5041.231: 0.1388% ( 9) 00:09:37.470 5041.231 - 5066.437: 0.1748% ( 7) 00:09:37.470 5066.437 - 5091.643: 0.2107% ( 7) 00:09:37.470 5091.643 - 5116.849: 0.2673% ( 11) 00:09:37.470 5116.849 - 5142.055: 0.3187% ( 10) 00:09:37.470 5142.055 - 5167.262: 0.3906% ( 14) 00:09:37.470 5167.262 - 5192.468: 0.4626% ( 14) 00:09:37.470 5192.468 - 5217.674: 0.5191% ( 11) 00:09:37.470 5217.674 - 5242.880: 0.6116% ( 18) 00:09:37.470 5242.880 - 5268.086: 0.6990% ( 17) 00:09:37.470 5268.086 - 5293.292: 0.7915% ( 18) 00:09:37.470 5293.292 - 5318.498: 0.9200% ( 25) 00:09:37.470 5318.498 - 5343.705: 1.0794% ( 31) 00:09:37.470 5343.705 - 5368.911: 1.2233% ( 28) 00:09:37.470 5368.911 - 5394.117: 1.4083% ( 36) 00:09:37.470 5394.117 - 5419.323: 1.5522% ( 28) 00:09:37.470 5419.323 - 5444.529: 1.7630% ( 41) 00:09:37.470 5444.529 - 5469.735: 1.9891% ( 44) 00:09:37.470 5469.735 - 5494.942: 2.3386% ( 68) 00:09:37.470 5494.942 - 5520.148: 2.6778% ( 66) 00:09:37.470 5520.148 - 5545.354: 3.1199% ( 86) 00:09:37.470 5545.354 - 5570.560: 3.5310% ( 80) 00:09:37.470 5570.560 - 5595.766: 4.0193% ( 95) 00:09:37.470 5595.766 - 5620.972: 4.6926% ( 131) 00:09:37.470 5620.972 - 5646.178: 5.4225% ( 142) 00:09:37.470 5646.178 - 5671.385: 6.0290% ( 118) 00:09:37.470 5671.385 - 5696.591: 7.1135% ( 211) 00:09:37.470 5696.591 - 5721.797: 8.1928% ( 210) 00:09:37.470 5721.797 - 5747.003: 9.4727% ( 249) 00:09:37.470 5747.003 - 5772.209: 10.8707% ( 272) 00:09:37.470 5772.209 - 5797.415: 12.4640% ( 310) 00:09:37.470 5797.415 - 5822.622: 13.9700% ( 293) 00:09:37.470 5822.622 - 5847.828: 15.5736% ( 312) 00:09:37.470 5847.828 - 5873.034: 16.9870% ( 275) 00:09:37.470 5873.034 - 5898.240: 18.4879% ( 292) 00:09:37.470 5898.240 - 5923.446: 20.2457% ( 342) 00:09:37.470 5923.446 - 5948.652: 22.0395% ( 349) 00:09:37.470 5948.652 - 5973.858: 23.6996% ( 323) 00:09:37.470 5973.858 - 5999.065: 25.3341% ( 318) 00:09:37.470 5999.065 - 6024.271: 26.8709% ( 299) 00:09:37.470 6024.271 - 6049.477: 28.4282% ( 303) 00:09:37.470 6049.477 - 6074.683: 30.0833% ( 322) 00:09:37.470 6074.683 - 6099.889: 31.9079% ( 355) 00:09:37.470 6099.889 - 6125.095: 33.9227% ( 392) 00:09:37.470 6125.095 - 6150.302: 35.5931% ( 325) 00:09:37.470 6150.302 - 6175.508: 37.3150% ( 335) 00:09:37.470 6175.508 - 6200.714: 38.8929% ( 307) 00:09:37.470 6200.714 - 6225.920: 40.3886% ( 291) 00:09:37.470 6225.920 - 6251.126: 41.6684% ( 249) 00:09:37.470 6251.126 - 6276.332: 43.0715% ( 273) 00:09:37.470 6276.332 - 6301.538: 44.3308% ( 245) 00:09:37.470 6301.538 - 6326.745: 45.5695% ( 241) 00:09:37.470 6326.745 - 6351.951: 46.8030% ( 240) 00:09:37.470 6351.951 - 6377.157: 48.2422% ( 280) 00:09:37.470 6377.157 - 6402.363: 49.8047% ( 304) 00:09:37.470 6402.363 - 6427.569: 51.4083% ( 312) 00:09:37.470 6427.569 - 6452.775: 52.9040% ( 291) 00:09:37.470 6452.775 - 6503.188: 55.9879% ( 600) 00:09:37.470 6503.188 - 6553.600: 58.7222% ( 532) 00:09:37.470 6553.600 - 6604.012: 61.4206% ( 525) 00:09:37.470 6604.012 - 6654.425: 64.2373% ( 548) 00:09:37.470 6654.425 - 6704.837: 67.0847% ( 554) 00:09:37.470 6704.837 - 6755.249: 69.8242% ( 533) 00:09:37.471 6755.249 - 6805.662: 72.1731% ( 457) 00:09:37.471 6805.662 - 6856.074: 74.4655% ( 446) 00:09:37.471 6856.074 - 6906.486: 76.5985% ( 415) 00:09:37.471 6906.486 - 6956.898: 78.8651% ( 441) 00:09:37.471 6956.898 - 7007.311: 
81.0033% ( 416) 00:09:37.471 7007.311 - 7057.723: 83.1157% ( 411) 00:09:37.471 7057.723 - 7108.135: 84.9044% ( 348) 00:09:37.471 7108.135 - 7158.548: 86.4206% ( 295) 00:09:37.471 7158.548 - 7208.960: 87.8649% ( 281) 00:09:37.471 7208.960 - 7259.372: 89.1345% ( 247) 00:09:37.471 7259.372 - 7309.785: 90.2755% ( 222) 00:09:37.471 7309.785 - 7360.197: 91.2829% ( 196) 00:09:37.471 7360.197 - 7410.609: 92.1412% ( 167) 00:09:37.471 7410.609 - 7461.022: 92.9739% ( 162) 00:09:37.471 7461.022 - 7511.434: 93.6729% ( 136) 00:09:37.471 7511.434 - 7561.846: 94.2229% ( 107) 00:09:37.471 7561.846 - 7612.258: 94.8191% ( 116) 00:09:37.471 7612.258 - 7662.671: 95.2148% ( 77) 00:09:37.471 7662.671 - 7713.083: 95.6055% ( 76) 00:09:37.471 7713.083 - 7763.495: 95.8368% ( 45) 00:09:37.471 7763.495 - 7813.908: 96.1554% ( 62) 00:09:37.471 7813.908 - 7864.320: 96.3919% ( 46) 00:09:37.471 7864.320 - 7914.732: 96.5563% ( 32) 00:09:37.471 7914.732 - 7965.145: 96.7414% ( 36) 00:09:37.471 7965.145 - 8015.557: 96.8956% ( 30) 00:09:37.471 8015.557 - 8065.969: 97.0035% ( 21) 00:09:37.471 8065.969 - 8116.382: 97.1166% ( 22) 00:09:37.471 8116.382 - 8166.794: 97.3016% ( 36) 00:09:37.471 8166.794 - 8217.206: 97.3890% ( 17) 00:09:37.471 8217.206 - 8267.618: 97.4815% ( 18) 00:09:37.471 8267.618 - 8318.031: 97.5586% ( 15) 00:09:37.471 8318.031 - 8368.443: 97.6203% ( 12) 00:09:37.471 8368.443 - 8418.855: 97.6768% ( 11) 00:09:37.471 8418.855 - 8469.268: 97.7282% ( 10) 00:09:37.471 8469.268 - 8519.680: 97.7745% ( 9) 00:09:37.471 8519.680 - 8570.092: 97.8259% ( 10) 00:09:37.471 8570.092 - 8620.505: 97.8670% ( 8) 00:09:37.471 8620.505 - 8670.917: 97.9030% ( 7) 00:09:37.471 8670.917 - 8721.329: 97.9338% ( 6) 00:09:37.471 8721.329 - 8771.742: 97.9801% ( 9) 00:09:37.471 8771.742 - 8822.154: 98.0109% ( 6) 00:09:37.471 8822.154 - 8872.566: 98.0931% ( 16) 00:09:37.471 8872.566 - 8922.978: 98.1445% ( 10) 00:09:37.471 8922.978 - 8973.391: 98.1908% ( 9) 00:09:37.471 8973.391 - 9023.803: 98.2576% ( 13) 00:09:37.471 9023.803 - 9074.215: 98.3347% ( 15) 00:09:37.471 9074.215 - 9124.628: 98.3861% ( 10) 00:09:37.471 9124.628 - 9175.040: 98.4169% ( 6) 00:09:37.471 9175.040 - 9225.452: 98.4632% ( 9) 00:09:37.471 9225.452 - 9275.865: 98.5197% ( 11) 00:09:37.471 9275.865 - 9326.277: 98.5557% ( 7) 00:09:37.471 9326.277 - 9376.689: 98.6071% ( 10) 00:09:37.471 9376.689 - 9427.102: 98.6482% ( 8) 00:09:37.471 9427.102 - 9477.514: 98.7202% ( 14) 00:09:37.471 9477.514 - 9527.926: 98.7613% ( 8) 00:09:37.471 9527.926 - 9578.338: 98.8076% ( 9) 00:09:37.471 9578.338 - 9628.751: 98.8590% ( 10) 00:09:37.471 9628.751 - 9679.163: 98.8898% ( 6) 00:09:37.471 9679.163 - 9729.575: 98.9309% ( 8) 00:09:37.471 9729.575 - 9779.988: 98.9669% ( 7) 00:09:37.471 9779.988 - 9830.400: 99.0080% ( 8) 00:09:37.471 9830.400 - 9880.812: 99.0491% ( 8) 00:09:37.471 9880.812 - 9931.225: 99.0748% ( 5) 00:09:37.471 9931.225 - 9981.637: 99.1005% ( 5) 00:09:37.471 9981.637 - 10032.049: 99.1314% ( 6) 00:09:37.471 10032.049 - 10082.462: 99.1519% ( 4) 00:09:37.471 10082.462 - 10132.874: 99.1776% ( 5) 00:09:37.471 10132.874 - 10183.286: 99.2033% ( 5) 00:09:37.471 10183.286 - 10233.698: 99.2342% ( 6) 00:09:37.471 10233.698 - 10284.111: 99.2599% ( 5) 00:09:37.471 10284.111 - 10334.523: 99.2804% ( 4) 00:09:37.471 10334.523 - 10384.935: 99.2958% ( 3) 00:09:37.471 10384.935 - 10435.348: 99.3113% ( 3) 00:09:37.471 10435.348 - 10485.760: 99.3164% ( 1) 00:09:37.471 10485.760 - 10536.172: 99.3267% ( 2) 00:09:37.471 10536.172 - 10586.585: 99.3370% ( 2) 00:09:37.471 10586.585 - 10636.997: 99.3421% ( 1) 
00:09:37.471 23189.662 - 23290.486: 99.3575% ( 3) 00:09:37.471 23290.486 - 23391.311: 99.3781% ( 4) 00:09:37.471 23391.311 - 23492.135: 99.3986% ( 4) 00:09:37.471 23492.135 - 23592.960: 99.4141% ( 3) 00:09:37.471 23592.960 - 23693.785: 99.4346% ( 4) 00:09:37.471 23693.785 - 23794.609: 99.4500% ( 3) 00:09:37.471 23794.609 - 23895.434: 99.4706% ( 4) 00:09:37.471 23895.434 - 23996.258: 99.4809% ( 2) 00:09:37.471 23996.258 - 24097.083: 99.5014% ( 4) 00:09:37.471 24097.083 - 24197.908: 99.5169% ( 3) 00:09:37.471 24197.908 - 24298.732: 99.5374% ( 4) 00:09:37.471 24298.732 - 24399.557: 99.5528% ( 3) 00:09:37.471 24399.557 - 24500.382: 99.5734% ( 4) 00:09:37.471 24500.382 - 24601.206: 99.5940% ( 4) 00:09:37.471 24601.206 - 24702.031: 99.6094% ( 3) 00:09:37.471 24702.031 - 24802.855: 99.6299% ( 4) 00:09:37.471 24802.855 - 24903.680: 99.6505% ( 4) 00:09:37.471 24903.680 - 25004.505: 99.6659% ( 3) 00:09:37.471 25004.505 - 25105.329: 99.6865% ( 4) 00:09:37.471 25105.329 - 25206.154: 99.6968% ( 2) 00:09:37.471 25206.154 - 25306.978: 99.7225% ( 5) 00:09:37.471 25306.978 - 25407.803: 99.7379% ( 3) 00:09:37.471 25407.803 - 25508.628: 99.7533% ( 3) 00:09:37.471 25508.628 - 25609.452: 99.7738% ( 4) 00:09:37.471 25609.452 - 25710.277: 99.7893% ( 3) 00:09:37.471 25710.277 - 25811.102: 99.8098% ( 4) 00:09:37.471 25811.102 - 26012.751: 99.8458% ( 7) 00:09:37.471 26012.751 - 26214.400: 99.8818% ( 7) 00:09:37.471 26214.400 - 26416.049: 99.9229% ( 8) 00:09:37.471 26416.049 - 26617.698: 99.9640% ( 8) 00:09:37.471 26617.698 - 26819.348: 100.0000% ( 7) 00:09:37.471 00:09:37.471 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:37.471 ============================================================================== 00:09:37.471 Range in us Cumulative IO count 00:09:37.471 5217.674 - 5242.880: 0.0051% ( 1) 00:09:37.471 5268.086 - 5293.292: 0.0154% ( 2) 00:09:37.471 5293.292 - 5318.498: 0.0206% ( 1) 00:09:37.471 5318.498 - 5343.705: 0.0257% ( 1) 00:09:37.471 5343.705 - 5368.911: 0.0308% ( 1) 00:09:37.471 5394.117 - 5419.323: 0.0411% ( 2) 00:09:37.471 5419.323 - 5444.529: 0.0668% ( 5) 00:09:37.471 5444.529 - 5469.735: 0.0925% ( 5) 00:09:37.471 5469.735 - 5494.942: 0.1234% ( 6) 00:09:37.471 5494.942 - 5520.148: 0.1542% ( 6) 00:09:37.471 5520.148 - 5545.354: 0.1748% ( 4) 00:09:37.471 5545.354 - 5570.560: 0.2159% ( 8) 00:09:37.471 5570.560 - 5595.766: 0.2673% ( 10) 00:09:37.471 5595.766 - 5620.972: 0.3392% ( 14) 00:09:37.471 5620.972 - 5646.178: 0.4472% ( 21) 00:09:37.471 5646.178 - 5671.385: 0.6116% ( 32) 00:09:37.471 5671.385 - 5696.591: 0.8635% ( 49) 00:09:37.471 5696.591 - 5721.797: 1.1873% ( 63) 00:09:37.471 5721.797 - 5747.003: 1.5933% ( 79) 00:09:37.471 5747.003 - 5772.209: 2.1227% ( 103) 00:09:37.471 5772.209 - 5797.415: 2.6830% ( 109) 00:09:37.471 5797.415 - 5822.622: 3.3460% ( 129) 00:09:37.471 5822.622 - 5847.828: 4.1478% ( 156) 00:09:37.471 5847.828 - 5873.034: 5.0935% ( 184) 00:09:37.471 5873.034 - 5898.240: 5.9981% ( 176) 00:09:37.471 5898.240 - 5923.446: 7.0878% ( 212) 00:09:37.471 5923.446 - 5948.652: 8.3779% ( 251) 00:09:37.471 5948.652 - 5973.858: 10.1460% ( 344) 00:09:37.471 5973.858 - 5999.065: 12.2687% ( 413) 00:09:37.471 5999.065 - 6024.271: 14.3503% ( 405) 00:09:37.471 6024.271 - 6049.477: 16.0208% ( 325) 00:09:37.471 6049.477 - 6074.683: 17.8814% ( 362) 00:09:37.471 6074.683 - 6099.889: 20.0144% ( 415) 00:09:37.471 6099.889 - 6125.095: 22.8053% ( 543) 00:09:37.471 6125.095 - 6150.302: 25.3084% ( 487) 00:09:37.471 6150.302 - 6175.508: 27.5699% ( 440) 00:09:37.471 6175.508 - 6200.714: 
30.0678% ( 486) 00:09:37.471 6200.714 - 6225.920: 32.4681% ( 467) 00:09:37.471 6225.920 - 6251.126: 35.9324% ( 674) 00:09:37.471 6251.126 - 6276.332: 38.6256% ( 524) 00:09:37.471 6276.332 - 6301.538: 41.4833% ( 556) 00:09:37.471 6301.538 - 6326.745: 44.4079% ( 569) 00:09:37.471 6326.745 - 6351.951: 47.1320% ( 530) 00:09:37.471 6351.951 - 6377.157: 50.8583% ( 725) 00:09:37.471 6377.157 - 6402.363: 54.3277% ( 675) 00:09:37.471 6402.363 - 6427.569: 56.8668% ( 494) 00:09:37.471 6427.569 - 6452.775: 58.6811% ( 353) 00:09:37.471 6452.775 - 6503.188: 62.6336% ( 769) 00:09:37.471 6503.188 - 6553.600: 65.7792% ( 612) 00:09:37.471 6553.600 - 6604.012: 68.7654% ( 581) 00:09:37.471 6604.012 - 6654.425: 71.7208% ( 575) 00:09:37.471 6654.425 - 6704.837: 74.6762% ( 575) 00:09:37.471 6704.837 - 6755.249: 77.3592% ( 522) 00:09:37.471 6755.249 - 6805.662: 79.7543% ( 466) 00:09:37.471 6805.662 - 6856.074: 82.4424% ( 523) 00:09:37.471 6856.074 - 6906.486: 84.6782% ( 435) 00:09:37.471 6906.486 - 6956.898: 86.8678% ( 426) 00:09:37.471 6956.898 - 7007.311: 88.7387% ( 364) 00:09:37.471 7007.311 - 7057.723: 89.8643% ( 219) 00:09:37.471 7057.723 - 7108.135: 91.2212% ( 264) 00:09:37.471 7108.135 - 7158.548: 92.0384% ( 159) 00:09:37.471 7158.548 - 7208.960: 92.7529% ( 139) 00:09:37.471 7208.960 - 7259.372: 93.3748% ( 121) 00:09:37.471 7259.372 - 7309.785: 93.8579% ( 94) 00:09:37.471 7309.785 - 7360.197: 94.2434% ( 75) 00:09:37.471 7360.197 - 7410.609: 94.6597% ( 81) 00:09:37.471 7410.609 - 7461.022: 95.0041% ( 67) 00:09:37.471 7461.022 - 7511.434: 95.2611% ( 50) 00:09:37.471 7511.434 - 7561.846: 95.6209% ( 70) 00:09:37.471 7561.846 - 7612.258: 95.7802% ( 31) 00:09:37.471 7612.258 - 7662.671: 95.9139% ( 26) 00:09:37.471 7662.671 - 7713.083: 96.0526% ( 27) 00:09:37.471 7713.083 - 7763.495: 96.1708% ( 23) 00:09:37.471 7763.495 - 7813.908: 96.2993% ( 25) 00:09:37.471 7813.908 - 7864.320: 96.4587% ( 31) 00:09:37.471 7864.320 - 7914.732: 96.6180% ( 31) 00:09:37.471 7914.732 - 7965.145: 96.8185% ( 39) 00:09:37.471 7965.145 - 8015.557: 97.0549% ( 46) 00:09:37.471 8015.557 - 8065.969: 97.2296% ( 34) 00:09:37.471 8065.969 - 8116.382: 97.4095% ( 35) 00:09:37.471 8116.382 - 8166.794: 97.6100% ( 39) 00:09:37.471 8166.794 - 8217.206: 97.8104% ( 39) 00:09:37.471 8217.206 - 8267.618: 98.0366% ( 44) 00:09:37.471 8267.618 - 8318.031: 98.1394% ( 20) 00:09:37.471 8318.031 - 8368.443: 98.2216% ( 16) 00:09:37.472 8368.443 - 8418.855: 98.2833% ( 12) 00:09:37.472 8418.855 - 8469.268: 98.3347% ( 10) 00:09:37.472 8469.268 - 8519.680: 98.3655% ( 6) 00:09:37.472 8519.680 - 8570.092: 98.4067% ( 8) 00:09:37.472 8570.092 - 8620.505: 98.4426% ( 7) 00:09:37.472 8620.505 - 8670.917: 98.4632% ( 4) 00:09:37.472 8670.917 - 8721.329: 98.4889% ( 5) 00:09:37.472 8721.329 - 8771.742: 98.5197% ( 6) 00:09:37.472 8771.742 - 8822.154: 98.5403% ( 4) 00:09:37.472 8822.154 - 8872.566: 98.5506% ( 2) 00:09:37.472 8872.566 - 8922.978: 98.5609% ( 2) 00:09:37.472 8922.978 - 8973.391: 98.5711% ( 2) 00:09:37.472 8973.391 - 9023.803: 98.5814% ( 2) 00:09:37.472 9023.803 - 9074.215: 98.5968% ( 3) 00:09:37.472 9074.215 - 9124.628: 98.6071% ( 2) 00:09:37.472 9124.628 - 9175.040: 98.6174% ( 2) 00:09:37.472 9175.040 - 9225.452: 98.6277% ( 2) 00:09:37.472 9225.452 - 9275.865: 98.6431% ( 3) 00:09:37.472 9275.865 - 9326.277: 98.6534% ( 2) 00:09:37.472 9326.277 - 9376.689: 98.6637% ( 2) 00:09:37.472 9376.689 - 9427.102: 98.6791% ( 3) 00:09:37.472 9427.102 - 9477.514: 98.6842% ( 1) 00:09:37.472 10082.462 - 10132.874: 98.7202% ( 7) 00:09:37.472 10132.874 - 10183.286: 98.7510% ( 6) 
00:09:37.472 10183.286 - 10233.698: 98.7870% ( 7) 00:09:37.472 10233.698 - 10284.111: 98.8178% ( 6) 00:09:37.472 10284.111 - 10334.523: 98.8487% ( 6) 00:09:37.472 10334.523 - 10384.935: 98.9977% ( 29) 00:09:37.472 10384.935 - 10435.348: 99.1211% ( 24) 00:09:37.472 10435.348 - 10485.760: 99.1674% ( 9) 00:09:37.472 10485.760 - 10536.172: 99.1828% ( 3) 00:09:37.472 10536.172 - 10586.585: 99.1982% ( 3) 00:09:37.472 10586.585 - 10636.997: 99.2136% ( 3) 00:09:37.472 10636.997 - 10687.409: 99.2290% ( 3) 00:09:37.472 10687.409 - 10737.822: 99.2444% ( 3) 00:09:37.472 10737.822 - 10788.234: 99.2599% ( 3) 00:09:37.472 10788.234 - 10838.646: 99.2753% ( 3) 00:09:37.472 10838.646 - 10889.058: 99.2958% ( 4) 00:09:37.472 10889.058 - 10939.471: 99.3113% ( 3) 00:09:37.472 10939.471 - 10989.883: 99.3318% ( 4) 00:09:37.472 10989.883 - 11040.295: 99.3421% ( 2) 00:09:37.472 22181.415 - 22282.240: 99.3575% ( 3) 00:09:37.472 22282.240 - 22383.065: 99.3781% ( 4) 00:09:37.472 22383.065 - 22483.889: 99.4038% ( 5) 00:09:37.472 22483.889 - 22584.714: 99.4192% ( 3) 00:09:37.472 22584.714 - 22685.538: 99.4449% ( 5) 00:09:37.472 22685.538 - 22786.363: 99.4655% ( 4) 00:09:37.472 22786.363 - 22887.188: 99.4860% ( 4) 00:09:37.472 22887.188 - 22988.012: 99.5066% ( 4) 00:09:37.472 22988.012 - 23088.837: 99.5323% ( 5) 00:09:37.472 23088.837 - 23189.662: 99.5528% ( 4) 00:09:37.472 23189.662 - 23290.486: 99.5683% ( 3) 00:09:37.472 23290.486 - 23391.311: 99.5888% ( 4) 00:09:37.472 23391.311 - 23492.135: 99.6094% ( 4) 00:09:37.472 23492.135 - 23592.960: 99.6299% ( 4) 00:09:37.472 23592.960 - 23693.785: 99.6505% ( 4) 00:09:37.472 23693.785 - 23794.609: 99.6711% ( 4) 00:09:37.472 23794.609 - 23895.434: 99.6916% ( 4) 00:09:37.472 23895.434 - 23996.258: 99.7122% ( 4) 00:09:37.472 23996.258 - 24097.083: 99.7327% ( 4) 00:09:37.472 24097.083 - 24197.908: 99.7481% ( 3) 00:09:37.472 24197.908 - 24298.732: 99.7687% ( 4) 00:09:37.472 24298.732 - 24399.557: 99.7893% ( 4) 00:09:37.472 24399.557 - 24500.382: 99.8098% ( 4) 00:09:37.472 24500.382 - 24601.206: 99.8304% ( 4) 00:09:37.472 24601.206 - 24702.031: 99.8509% ( 4) 00:09:37.472 24702.031 - 24802.855: 99.8715% ( 4) 00:09:37.472 24802.855 - 24903.680: 99.8921% ( 4) 00:09:37.472 24903.680 - 25004.505: 99.9075% ( 3) 00:09:37.472 25004.505 - 25105.329: 99.9280% ( 4) 00:09:37.472 25105.329 - 25206.154: 99.9537% ( 5) 00:09:37.472 25206.154 - 25306.978: 99.9743% ( 4) 00:09:37.472 25306.978 - 25407.803: 99.9949% ( 4) 00:09:37.472 25407.803 - 25508.628: 100.0000% ( 1) 00:09:37.472 00:09:37.472 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:37.472 ============================================================================== 00:09:37.472 Range in us Cumulative IO count 00:09:37.472 4915.200 - 4940.406: 0.0051% ( 1) 00:09:37.472 4965.612 - 4990.818: 0.0103% ( 1) 00:09:37.472 5041.231 - 5066.437: 0.0154% ( 1) 00:09:37.472 5116.849 - 5142.055: 0.0206% ( 1) 00:09:37.472 5167.262 - 5192.468: 0.0257% ( 1) 00:09:37.472 5192.468 - 5217.674: 0.0411% ( 3) 00:09:37.472 5217.674 - 5242.880: 0.0668% ( 5) 00:09:37.472 5242.880 - 5268.086: 0.0822% ( 3) 00:09:37.472 5268.086 - 5293.292: 0.1079% ( 5) 00:09:37.472 5293.292 - 5318.498: 0.1439% ( 7) 00:09:37.472 5318.498 - 5343.705: 0.1953% ( 10) 00:09:37.472 5343.705 - 5368.911: 0.2570% ( 12) 00:09:37.472 5368.911 - 5394.117: 0.3084% ( 10) 00:09:37.472 5394.117 - 5419.323: 0.3444% ( 7) 00:09:37.472 5419.323 - 5444.529: 0.4317% ( 17) 00:09:37.472 5444.529 - 5469.735: 0.5088% ( 15) 00:09:37.472 5469.735 - 5494.942: 0.6219% ( 22) 00:09:37.472 5494.942 
- 5520.148: 0.7607% ( 27) 00:09:37.472 5520.148 - 5545.354: 0.9817% ( 43) 00:09:37.472 5545.354 - 5570.560: 1.2336% ( 49) 00:09:37.472 5570.560 - 5595.766: 1.4340% ( 39) 00:09:37.472 5595.766 - 5620.972: 1.6345% ( 39) 00:09:37.472 5620.972 - 5646.178: 1.8195% ( 36) 00:09:37.472 5646.178 - 5671.385: 2.0816% ( 51) 00:09:37.472 5671.385 - 5696.591: 2.3592% ( 54) 00:09:37.472 5696.591 - 5721.797: 2.6984% ( 66) 00:09:37.472 5721.797 - 5747.003: 3.1301% ( 84) 00:09:37.472 5747.003 - 5772.209: 3.7366% ( 118) 00:09:37.472 5772.209 - 5797.415: 4.4562% ( 140) 00:09:37.472 5797.415 - 5822.622: 5.2991% ( 164) 00:09:37.472 5822.622 - 5847.828: 6.3271% ( 200) 00:09:37.472 5847.828 - 5873.034: 7.4322% ( 215) 00:09:37.472 5873.034 - 5898.240: 8.7222% ( 251) 00:09:37.472 5898.240 - 5923.446: 9.9866% ( 246) 00:09:37.472 5923.446 - 5948.652: 11.4361% ( 282) 00:09:37.472 5948.652 - 5973.858: 12.9934% ( 303) 00:09:37.472 5973.858 - 5999.065: 14.7101% ( 334) 00:09:37.472 5999.065 - 6024.271: 16.4525% ( 339) 00:09:37.472 6024.271 - 6049.477: 18.2360% ( 347) 00:09:37.472 6049.477 - 6074.683: 19.9322% ( 330) 00:09:37.472 6074.683 - 6099.889: 21.6231% ( 329) 00:09:37.472 6099.889 - 6125.095: 23.6020% ( 385) 00:09:37.472 6125.095 - 6150.302: 25.9149% ( 450) 00:09:37.472 6150.302 - 6175.508: 28.3049% ( 465) 00:09:37.472 6175.508 - 6200.714: 30.4636% ( 420) 00:09:37.472 6200.714 - 6225.920: 33.0541% ( 504) 00:09:37.472 6225.920 - 6251.126: 35.9992% ( 573) 00:09:37.472 6251.126 - 6276.332: 38.8569% ( 556) 00:09:37.472 6276.332 - 6301.538: 41.3240% ( 480) 00:09:37.472 6301.538 - 6326.745: 43.7346% ( 469) 00:09:37.472 6326.745 - 6351.951: 46.5820% ( 554) 00:09:37.472 6351.951 - 6377.157: 49.1622% ( 502) 00:09:37.472 6377.157 - 6402.363: 51.5676% ( 468) 00:09:37.472 6402.363 - 6427.569: 53.6801% ( 411) 00:09:37.472 6427.569 - 6452.775: 55.8697% ( 426) 00:09:37.472 6452.775 - 6503.188: 60.1768% ( 838) 00:09:37.472 6503.188 - 6553.600: 64.2064% ( 784) 00:09:37.472 6553.600 - 6604.012: 67.0590% ( 555) 00:09:37.472 6604.012 - 6654.425: 69.9630% ( 565) 00:09:37.472 6654.425 - 6704.837: 73.3296% ( 655) 00:09:37.472 6704.837 - 6755.249: 76.1359% ( 546) 00:09:37.472 6755.249 - 6805.662: 78.8806% ( 534) 00:09:37.472 6805.662 - 6856.074: 81.9799% ( 603) 00:09:37.472 6856.074 - 6906.486: 84.2516% ( 442) 00:09:37.472 6906.486 - 6956.898: 86.2459% ( 388) 00:09:37.472 6956.898 - 7007.311: 88.3686% ( 413) 00:09:37.472 7007.311 - 7057.723: 90.0134% ( 320) 00:09:37.472 7057.723 - 7108.135: 91.2675% ( 244) 00:09:37.472 7108.135 - 7158.548: 92.4496% ( 230) 00:09:37.472 7158.548 - 7208.960: 93.2617% ( 158) 00:09:37.472 7208.960 - 7259.372: 93.9350% ( 131) 00:09:37.472 7259.372 - 7309.785: 94.4490% ( 100) 00:09:37.472 7309.785 - 7360.197: 94.9013% ( 88) 00:09:37.472 7360.197 - 7410.609: 95.2919% ( 76) 00:09:37.472 7410.609 - 7461.022: 95.6980% ( 79) 00:09:37.472 7461.022 - 7511.434: 95.9396% ( 47) 00:09:37.472 7511.434 - 7561.846: 96.2788% ( 66) 00:09:37.472 7561.846 - 7612.258: 96.4176% ( 27) 00:09:37.472 7612.258 - 7662.671: 96.5718% ( 30) 00:09:37.472 7662.671 - 7713.083: 96.7105% ( 27) 00:09:37.472 7713.083 - 7763.495: 96.8082% ( 19) 00:09:37.472 7763.495 - 7813.908: 96.9058% ( 19) 00:09:37.472 7813.908 - 7864.320: 96.9984% ( 18) 00:09:37.472 7864.320 - 7914.732: 97.0909% ( 18) 00:09:37.472 7914.732 - 7965.145: 97.1988% ( 21) 00:09:37.472 7965.145 - 8015.557: 97.3324% ( 26) 00:09:37.472 8015.557 - 8065.969: 97.4352% ( 20) 00:09:37.472 8065.969 - 8116.382: 97.5535% ( 23) 00:09:37.472 8116.382 - 8166.794: 97.7025% ( 29) 00:09:37.472 
8166.794 - 8217.206: 97.7693% ( 13) 00:09:37.472 8217.206 - 8267.618: 97.8310% ( 12) 00:09:37.472 8267.618 - 8318.031: 97.8875% ( 11) 00:09:37.472 8318.031 - 8368.443: 97.9338% ( 9) 00:09:37.472 8368.443 - 8418.855: 97.9903% ( 11) 00:09:37.472 8418.855 - 8469.268: 98.0983% ( 21) 00:09:37.472 8469.268 - 8519.680: 98.2319% ( 26) 00:09:37.472 8519.680 - 8570.092: 98.3655% ( 26) 00:09:37.472 8570.092 - 8620.505: 98.4067% ( 8) 00:09:37.472 8620.505 - 8670.917: 98.4581% ( 10) 00:09:37.472 8670.917 - 8721.329: 98.5095% ( 10) 00:09:37.472 8721.329 - 8771.742: 98.5609% ( 10) 00:09:37.472 8771.742 - 8822.154: 98.5814% ( 4) 00:09:37.472 8822.154 - 8872.566: 98.6071% ( 5) 00:09:37.472 8872.566 - 8922.978: 98.6277% ( 4) 00:09:37.472 8922.978 - 8973.391: 98.6534% ( 5) 00:09:37.472 8973.391 - 9023.803: 98.6637% ( 2) 00:09:37.472 9023.803 - 9074.215: 98.6739% ( 2) 00:09:37.472 9074.215 - 9124.628: 98.6842% ( 2) 00:09:37.472 11040.295 - 11090.708: 98.6894% ( 1) 00:09:37.472 11090.708 - 11141.120: 98.7048% ( 3) 00:09:37.472 11141.120 - 11191.532: 98.7202% ( 3) 00:09:37.472 11191.532 - 11241.945: 98.7356% ( 3) 00:09:37.472 11241.945 - 11292.357: 98.7510% ( 3) 00:09:37.472 11292.357 - 11342.769: 98.7716% ( 4) 00:09:37.472 11342.769 - 11393.182: 98.7921% ( 4) 00:09:37.472 11393.182 - 11443.594: 98.8127% ( 4) 00:09:37.472 11443.594 - 11494.006: 98.8333% ( 4) 00:09:37.472 11494.006 - 11544.418: 98.8487% ( 3) 00:09:37.473 11544.418 - 11594.831: 98.8692% ( 4) 00:09:37.473 11594.831 - 11645.243: 98.8847% ( 3) 00:09:37.473 11645.243 - 11695.655: 98.9001% ( 3) 00:09:37.473 11695.655 - 11746.068: 98.9155% ( 3) 00:09:37.473 11746.068 - 11796.480: 98.9361% ( 4) 00:09:37.473 11796.480 - 11846.892: 98.9515% ( 3) 00:09:37.473 11846.892 - 11897.305: 98.9669% ( 3) 00:09:37.473 11897.305 - 11947.717: 98.9823% ( 3) 00:09:37.473 11947.717 - 11998.129: 98.9977% ( 3) 00:09:37.473 11998.129 - 12048.542: 99.0183% ( 4) 00:09:37.473 12048.542 - 12098.954: 99.0337% ( 3) 00:09:37.473 12098.954 - 12149.366: 99.0491% ( 3) 00:09:37.473 12149.366 - 12199.778: 99.0646% ( 3) 00:09:37.473 12199.778 - 12250.191: 99.0800% ( 3) 00:09:37.473 12250.191 - 12300.603: 99.0954% ( 3) 00:09:37.473 12300.603 - 12351.015: 99.1108% ( 3) 00:09:37.473 12351.015 - 12401.428: 99.1262% ( 3) 00:09:37.473 12401.428 - 12451.840: 99.1417% ( 3) 00:09:37.473 12451.840 - 12502.252: 99.1571% ( 3) 00:09:37.473 12502.252 - 12552.665: 99.1674% ( 2) 00:09:37.473 12552.665 - 12603.077: 99.1828% ( 3) 00:09:37.473 12603.077 - 12653.489: 99.1982% ( 3) 00:09:37.473 12653.489 - 12703.902: 99.2136% ( 3) 00:09:37.473 12703.902 - 12754.314: 99.2290% ( 3) 00:09:37.473 12754.314 - 12804.726: 99.2393% ( 2) 00:09:37.473 12804.726 - 12855.138: 99.2547% ( 3) 00:09:37.473 12855.138 - 12905.551: 99.2701% ( 3) 00:09:37.473 12905.551 - 13006.375: 99.2958% ( 5) 00:09:37.473 13006.375 - 13107.200: 99.3267% ( 6) 00:09:37.473 13107.200 - 13208.025: 99.3421% ( 3) 00:09:37.473 20870.695 - 20971.520: 99.3472% ( 1) 00:09:37.473 20971.520 - 21072.345: 99.3627% ( 3) 00:09:37.473 21072.345 - 21173.169: 99.3832% ( 4) 00:09:37.473 21173.169 - 21273.994: 99.4038% ( 4) 00:09:37.473 21273.994 - 21374.818: 99.4243% ( 4) 00:09:37.473 21374.818 - 21475.643: 99.4449% ( 4) 00:09:37.473 21475.643 - 21576.468: 99.4655% ( 4) 00:09:37.473 21576.468 - 21677.292: 99.4860% ( 4) 00:09:37.473 21677.292 - 21778.117: 99.5066% ( 4) 00:09:37.473 21778.117 - 21878.942: 99.5271% ( 4) 00:09:37.473 21878.942 - 21979.766: 99.5477% ( 4) 00:09:37.473 21979.766 - 22080.591: 99.5683% ( 4) 00:09:37.473 22080.591 - 22181.415: 99.5888% 
( 4) 00:09:37.473 22181.415 - 22282.240: 99.6042% ( 3) 00:09:37.473 22282.240 - 22383.065: 99.6248% ( 4) 00:09:37.473 22383.065 - 22483.889: 99.6454% ( 4) 00:09:37.473 22483.889 - 22584.714: 99.6659% ( 4) 00:09:37.473 22584.714 - 22685.538: 99.6865% ( 4) 00:09:37.473 22685.538 - 22786.363: 99.7070% ( 4) 00:09:37.473 22786.363 - 22887.188: 99.7225% ( 3) 00:09:37.473 22887.188 - 22988.012: 99.7430% ( 4) 00:09:37.473 22988.012 - 23088.837: 99.7636% ( 4) 00:09:37.473 23088.837 - 23189.662: 99.7841% ( 4) 00:09:37.473 23189.662 - 23290.486: 99.8047% ( 4) 00:09:37.473 23290.486 - 23391.311: 99.8252% ( 4) 00:09:37.473 23391.311 - 23492.135: 99.8458% ( 4) 00:09:37.473 23492.135 - 23592.960: 99.8664% ( 4) 00:09:37.473 23592.960 - 23693.785: 99.8869% ( 4) 00:09:37.473 23693.785 - 23794.609: 99.9075% ( 4) 00:09:37.473 23794.609 - 23895.434: 99.9280% ( 4) 00:09:37.473 23895.434 - 23996.258: 99.9486% ( 4) 00:09:37.473 23996.258 - 24097.083: 99.9692% ( 4) 00:09:37.473 24097.083 - 24197.908: 99.9897% ( 4) 00:09:37.473 24197.908 - 24298.732: 100.0000% ( 2) 00:09:37.473 00:09:37.473 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:37.473 ============================================================================== 00:09:37.473 Range in us Cumulative IO count 00:09:37.473 5142.055 - 5167.262: 0.0051% ( 1) 00:09:37.473 5167.262 - 5192.468: 0.0154% ( 2) 00:09:37.473 5217.674 - 5242.880: 0.0206% ( 1) 00:09:37.473 5242.880 - 5268.086: 0.0257% ( 1) 00:09:37.473 5268.086 - 5293.292: 0.0360% ( 2) 00:09:37.473 5293.292 - 5318.498: 0.0411% ( 1) 00:09:37.473 5318.498 - 5343.705: 0.0514% ( 2) 00:09:37.473 5343.705 - 5368.911: 0.0668% ( 3) 00:09:37.473 5368.911 - 5394.117: 0.0771% ( 2) 00:09:37.473 5419.323 - 5444.529: 0.1028% ( 5) 00:09:37.473 5444.529 - 5469.735: 0.1542% ( 10) 00:09:37.473 5469.735 - 5494.942: 0.1850% ( 6) 00:09:37.473 5494.942 - 5520.148: 0.2570% ( 14) 00:09:37.473 5520.148 - 5545.354: 0.3289% ( 14) 00:09:37.473 5545.354 - 5570.560: 0.4626% ( 26) 00:09:37.473 5570.560 - 5595.766: 0.5911% ( 25) 00:09:37.473 5595.766 - 5620.972: 0.7247% ( 26) 00:09:37.473 5620.972 - 5646.178: 0.8995% ( 34) 00:09:37.473 5646.178 - 5671.385: 1.1308% ( 45) 00:09:37.473 5671.385 - 5696.591: 1.3980% ( 52) 00:09:37.473 5696.591 - 5721.797: 1.7270% ( 64) 00:09:37.473 5721.797 - 5747.003: 2.1382% ( 80) 00:09:37.473 5747.003 - 5772.209: 2.5956% ( 89) 00:09:37.473 5772.209 - 5797.415: 3.1815% ( 114) 00:09:37.473 5797.415 - 5822.622: 3.9885% ( 157) 00:09:37.473 5822.622 - 5847.828: 4.9445% ( 186) 00:09:37.473 5847.828 - 5873.034: 5.9930% ( 204) 00:09:37.473 5873.034 - 5898.240: 7.1340% ( 222) 00:09:37.473 5898.240 - 5923.446: 8.3727% ( 241) 00:09:37.473 5923.446 - 5948.652: 9.6988% ( 258) 00:09:37.473 5948.652 - 5973.858: 11.2510% ( 302) 00:09:37.473 5973.858 - 5999.065: 12.7724% ( 296) 00:09:37.473 5999.065 - 6024.271: 14.6639% ( 368) 00:09:37.473 6024.271 - 6049.477: 16.4062% ( 339) 00:09:37.473 6049.477 - 6074.683: 18.4879% ( 405) 00:09:37.473 6074.683 - 6099.889: 20.4513% ( 382) 00:09:37.473 6099.889 - 6125.095: 22.5329% ( 405) 00:09:37.473 6125.095 - 6150.302: 24.6711% ( 416) 00:09:37.473 6150.302 - 6175.508: 26.8966% ( 433) 00:09:37.473 6175.508 - 6200.714: 29.4151% ( 490) 00:09:37.473 6200.714 - 6225.920: 32.2985% ( 561) 00:09:37.473 6225.920 - 6251.126: 35.2282% ( 570) 00:09:37.473 6251.126 - 6276.332: 38.4097% ( 619) 00:09:37.473 6276.332 - 6301.538: 41.6684% ( 634) 00:09:37.473 6301.538 - 6326.745: 44.6649% ( 583) 00:09:37.473 6326.745 - 6351.951: 47.0600% ( 466) 00:09:37.473 6351.951 - 6377.157: 
49.6248% ( 499) 00:09:37.473 6377.157 - 6402.363: 52.2410% ( 509) 00:09:37.473 6402.363 - 6427.569: 54.6464% ( 468) 00:09:37.473 6427.569 - 6452.775: 56.8822% ( 435) 00:09:37.473 6452.775 - 6503.188: 60.5417% ( 712) 00:09:37.473 6503.188 - 6553.600: 64.4171% ( 754) 00:09:37.473 6553.600 - 6604.012: 67.5833% ( 616) 00:09:37.473 6604.012 - 6654.425: 70.6826% ( 603) 00:09:37.473 6654.425 - 6704.837: 74.6813% ( 778) 00:09:37.473 6704.837 - 6755.249: 77.7858% ( 604) 00:09:37.473 6755.249 - 6805.662: 80.7360% ( 574) 00:09:37.473 6805.662 - 6856.074: 83.1517% ( 470) 00:09:37.473 6856.074 - 6906.486: 85.3927% ( 436) 00:09:37.473 6906.486 - 6956.898: 87.5514% ( 420) 00:09:37.473 6956.898 - 7007.311: 89.0625% ( 294) 00:09:37.473 7007.311 - 7057.723: 90.3218% ( 245) 00:09:37.473 7057.723 - 7108.135: 91.5039% ( 230) 00:09:37.473 7108.135 - 7158.548: 92.9071% ( 273) 00:09:37.473 7158.548 - 7208.960: 93.6266% ( 140) 00:09:37.473 7208.960 - 7259.372: 94.2383% ( 119) 00:09:37.473 7259.372 - 7309.785: 94.9013% ( 129) 00:09:37.473 7309.785 - 7360.197: 95.3279% ( 83) 00:09:37.473 7360.197 - 7410.609: 95.6723% ( 67) 00:09:37.473 7410.609 - 7461.022: 95.9498% ( 54) 00:09:37.473 7461.022 - 7511.434: 96.1400% ( 37) 00:09:37.473 7511.434 - 7561.846: 96.3045% ( 32) 00:09:37.473 7561.846 - 7612.258: 96.4484% ( 28) 00:09:37.473 7612.258 - 7662.671: 96.6026% ( 30) 00:09:37.473 7662.671 - 7713.083: 96.7465% ( 28) 00:09:37.473 7713.083 - 7763.495: 96.9161% ( 33) 00:09:37.473 7763.495 - 7813.908: 97.0138% ( 19) 00:09:37.473 7813.908 - 7864.320: 97.0806% ( 13) 00:09:37.473 7864.320 - 7914.732: 97.1577% ( 15) 00:09:37.473 7914.732 - 7965.145: 97.2502% ( 18) 00:09:37.473 7965.145 - 8015.557: 97.3222% ( 14) 00:09:37.473 8015.557 - 8065.969: 97.3993% ( 15) 00:09:37.473 8065.969 - 8116.382: 97.4661% ( 13) 00:09:37.473 8116.382 - 8166.794: 97.5432% ( 15) 00:09:37.473 8166.794 - 8217.206: 97.6049% ( 12) 00:09:37.473 8217.206 - 8267.618: 97.6665% ( 12) 00:09:37.473 8267.618 - 8318.031: 97.7282% ( 12) 00:09:37.473 8318.031 - 8368.443: 97.7796% ( 10) 00:09:37.473 8368.443 - 8418.855: 97.8618% ( 16) 00:09:37.473 8418.855 - 8469.268: 97.9852% ( 24) 00:09:37.473 8469.268 - 8519.680: 98.0572% ( 14) 00:09:37.473 8519.680 - 8570.092: 98.1343% ( 15) 00:09:37.473 8570.092 - 8620.505: 98.2062% ( 14) 00:09:37.473 8620.505 - 8670.917: 98.3347% ( 25) 00:09:37.473 8670.917 - 8721.329: 98.4632% ( 25) 00:09:37.473 8721.329 - 8771.742: 98.5095% ( 9) 00:09:37.473 8771.742 - 8822.154: 98.5352% ( 5) 00:09:37.473 8822.154 - 8872.566: 98.5660% ( 6) 00:09:37.473 8872.566 - 8922.978: 98.5917% ( 5) 00:09:37.474 8922.978 - 8973.391: 98.6174% ( 5) 00:09:37.474 8973.391 - 9023.803: 98.6431% ( 5) 00:09:37.474 9023.803 - 9074.215: 98.6688% ( 5) 00:09:37.474 9074.215 - 9124.628: 98.6842% ( 3) 00:09:37.474 11241.945 - 11292.357: 98.6945% ( 2) 00:09:37.474 11292.357 - 11342.769: 98.7099% ( 3) 00:09:37.474 11342.769 - 11393.182: 98.7305% ( 4) 00:09:37.474 11393.182 - 11443.594: 98.7459% ( 3) 00:09:37.474 11443.594 - 11494.006: 98.7613% ( 3) 00:09:37.474 11494.006 - 11544.418: 98.7819% ( 4) 00:09:37.474 11544.418 - 11594.831: 98.7973% ( 3) 00:09:37.474 11594.831 - 11645.243: 98.8178% ( 4) 00:09:37.474 11645.243 - 11695.655: 98.8333% ( 3) 00:09:37.474 11695.655 - 11746.068: 98.8538% ( 4) 00:09:37.474 11746.068 - 11796.480: 98.8744% ( 4) 00:09:37.474 11796.480 - 11846.892: 98.8949% ( 4) 00:09:37.474 11846.892 - 11897.305: 98.9155% ( 4) 00:09:37.474 11897.305 - 11947.717: 98.9309% ( 3) 00:09:37.474 11947.717 - 11998.129: 98.9515% ( 4) 00:09:37.474 11998.129 - 
12048.542: 98.9720% ( 4) 00:09:37.474 12048.542 - 12098.954: 98.9926% ( 4) 00:09:37.474 12098.954 - 12149.366: 99.0080% ( 3) 00:09:37.474 12149.366 - 12199.778: 99.0286% ( 4) 00:09:37.474 12199.778 - 12250.191: 99.0440% ( 3) 00:09:37.474 12250.191 - 12300.603: 99.0646% ( 4) 00:09:37.474 12300.603 - 12351.015: 99.0800% ( 3) 00:09:37.474 12351.015 - 12401.428: 99.1005% ( 4) 00:09:37.474 12401.428 - 12451.840: 99.1160% ( 3) 00:09:37.474 12451.840 - 12502.252: 99.1365% ( 4) 00:09:37.474 12502.252 - 12552.665: 99.1519% ( 3) 00:09:37.474 12552.665 - 12603.077: 99.1725% ( 4) 00:09:37.474 12603.077 - 12653.489: 99.1879% ( 3) 00:09:37.474 12653.489 - 12703.902: 99.2085% ( 4) 00:09:37.474 12703.902 - 12754.314: 99.2239% ( 3) 00:09:37.474 12754.314 - 12804.726: 99.2444% ( 4) 00:09:37.474 12804.726 - 12855.138: 99.2599% ( 3) 00:09:37.474 12855.138 - 12905.551: 99.2804% ( 4) 00:09:37.474 12905.551 - 13006.375: 99.3113% ( 6) 00:09:37.474 13006.375 - 13107.200: 99.3421% ( 6) 00:09:37.474 19660.800 - 19761.625: 99.3472% ( 1) 00:09:37.474 19761.625 - 19862.449: 99.3678% ( 4) 00:09:37.474 19862.449 - 19963.274: 99.3884% ( 4) 00:09:37.474 19963.274 - 20064.098: 99.4089% ( 4) 00:09:37.474 20064.098 - 20164.923: 99.4295% ( 4) 00:09:37.474 20164.923 - 20265.748: 99.4449% ( 3) 00:09:37.474 20265.748 - 20366.572: 99.4655% ( 4) 00:09:37.474 20366.572 - 20467.397: 99.4860% ( 4) 00:09:37.474 20467.397 - 20568.222: 99.5066% ( 4) 00:09:37.474 20568.222 - 20669.046: 99.5271% ( 4) 00:09:37.474 20669.046 - 20769.871: 99.5477% ( 4) 00:09:37.474 20769.871 - 20870.695: 99.5683% ( 4) 00:09:37.474 20870.695 - 20971.520: 99.5888% ( 4) 00:09:37.474 20971.520 - 21072.345: 99.6094% ( 4) 00:09:37.474 21072.345 - 21173.169: 99.6299% ( 4) 00:09:37.474 21173.169 - 21273.994: 99.6505% ( 4) 00:09:37.474 21273.994 - 21374.818: 99.6711% ( 4) 00:09:37.474 21374.818 - 21475.643: 99.6916% ( 4) 00:09:37.474 21475.643 - 21576.468: 99.7122% ( 4) 00:09:37.474 21576.468 - 21677.292: 99.7276% ( 3) 00:09:37.474 21677.292 - 21778.117: 99.7481% ( 4) 00:09:37.474 21778.117 - 21878.942: 99.7687% ( 4) 00:09:37.474 21878.942 - 21979.766: 99.7893% ( 4) 00:09:37.474 21979.766 - 22080.591: 99.8047% ( 3) 00:09:37.474 22080.591 - 22181.415: 99.8304% ( 5) 00:09:37.474 22181.415 - 22282.240: 99.8509% ( 4) 00:09:37.474 22282.240 - 22383.065: 99.8715% ( 4) 00:09:37.474 22383.065 - 22483.889: 99.8921% ( 4) 00:09:37.474 22483.889 - 22584.714: 99.9126% ( 4) 00:09:37.474 22584.714 - 22685.538: 99.9332% ( 4) 00:09:37.474 22685.538 - 22786.363: 99.9486% ( 3) 00:09:37.474 22786.363 - 22887.188: 99.9692% ( 4) 00:09:37.474 22887.188 - 22988.012: 99.9897% ( 4) 00:09:37.474 22988.012 - 23088.837: 100.0000% ( 2) 00:09:37.474 00:09:37.474 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:37.474 ============================================================================== 00:09:37.474 Range in us Cumulative IO count 00:09:37.474 5217.674 - 5242.880: 0.0051% ( 1) 00:09:37.474 5242.880 - 5268.086: 0.0102% ( 1) 00:09:37.474 5293.292 - 5318.498: 0.0204% ( 2) 00:09:37.474 5318.498 - 5343.705: 0.0255% ( 1) 00:09:37.474 5343.705 - 5368.911: 0.0408% ( 3) 00:09:37.474 5368.911 - 5394.117: 0.0766% ( 7) 00:09:37.474 5394.117 - 5419.323: 0.1072% ( 6) 00:09:37.474 5419.323 - 5444.529: 0.1430% ( 7) 00:09:37.474 5444.529 - 5469.735: 0.1889% ( 9) 00:09:37.474 5469.735 - 5494.942: 0.2349% ( 9) 00:09:37.474 5494.942 - 5520.148: 0.2757% ( 8) 00:09:37.474 5520.148 - 5545.354: 0.3166% ( 8) 00:09:37.474 5545.354 - 5570.560: 0.3983% ( 16) 00:09:37.474 5570.560 - 5595.766: 0.4851% 
( 17) 00:09:37.474 5595.766 - 5620.972: 0.6179% ( 26) 00:09:37.474 5620.972 - 5646.178: 0.7761% ( 31) 00:09:37.474 5646.178 - 5671.385: 1.0468% ( 53) 00:09:37.474 5671.385 - 5696.591: 1.3123% ( 52) 00:09:37.474 5696.591 - 5721.797: 1.6595% ( 68) 00:09:37.474 5721.797 - 5747.003: 2.0476% ( 76) 00:09:37.474 5747.003 - 5772.209: 2.5991% ( 108) 00:09:37.474 5772.209 - 5797.415: 3.2578% ( 129) 00:09:37.474 5797.415 - 5822.622: 4.0748% ( 160) 00:09:37.474 5822.622 - 5847.828: 5.0449% ( 190) 00:09:37.474 5847.828 - 5873.034: 6.0866% ( 204) 00:09:37.474 5873.034 - 5898.240: 7.1895% ( 216) 00:09:37.474 5898.240 - 5923.446: 8.4406% ( 245) 00:09:37.474 5923.446 - 5948.652: 9.7682% ( 260) 00:09:37.474 5948.652 - 5973.858: 11.1417% ( 269) 00:09:37.474 5973.858 - 5999.065: 12.6277% ( 291) 00:09:37.474 5999.065 - 6024.271: 14.5884% ( 384) 00:09:37.474 6024.271 - 6049.477: 16.3756% ( 350) 00:09:37.474 6049.477 - 6074.683: 18.2598% ( 369) 00:09:37.474 6074.683 - 6099.889: 20.4606% ( 431) 00:09:37.474 6099.889 - 6125.095: 22.5082% ( 401) 00:09:37.474 6125.095 - 6150.302: 24.9898% ( 486) 00:09:37.474 6150.302 - 6175.508: 27.1548% ( 424) 00:09:37.474 6175.508 - 6200.714: 29.4577% ( 451) 00:09:37.474 6200.714 - 6225.920: 32.0057% ( 499) 00:09:37.474 6225.920 - 6251.126: 35.1971% ( 625) 00:09:37.474 6251.126 - 6276.332: 38.2302% ( 594) 00:09:37.474 6276.332 - 6301.538: 40.7935% ( 502) 00:09:37.474 6301.538 - 6326.745: 43.4794% ( 526) 00:09:37.474 6326.745 - 6351.951: 45.9201% ( 478) 00:09:37.474 6351.951 - 6377.157: 49.1983% ( 642) 00:09:37.474 6377.157 - 6402.363: 51.6033% ( 471) 00:09:37.474 6402.363 - 6427.569: 53.5590% ( 383) 00:09:37.474 6427.569 - 6452.775: 55.9079% ( 460) 00:09:37.474 6452.775 - 6503.188: 60.3503% ( 870) 00:09:37.474 6503.188 - 6553.600: 63.8429% ( 684) 00:09:37.474 6553.600 - 6604.012: 67.2998% ( 677) 00:09:37.474 6604.012 - 6654.425: 70.5678% ( 640) 00:09:37.474 6654.425 - 6704.837: 73.6826% ( 610) 00:09:37.474 6704.837 - 6755.249: 76.7463% ( 600) 00:09:37.474 6755.249 - 6805.662: 79.8458% ( 607) 00:09:37.474 6805.662 - 6856.074: 82.7053% ( 560) 00:09:37.474 6856.074 - 6906.486: 84.9009% ( 430) 00:09:37.474 6906.486 - 6956.898: 86.9230% ( 396) 00:09:37.474 6956.898 - 7007.311: 88.5621% ( 321) 00:09:37.474 7007.311 - 7057.723: 89.9050% ( 263) 00:09:37.474 7057.723 - 7108.135: 91.2633% ( 266) 00:09:37.474 7108.135 - 7158.548: 92.6113% ( 264) 00:09:37.474 7158.548 - 7208.960: 93.5713% ( 188) 00:09:37.474 7208.960 - 7259.372: 94.3015% ( 143) 00:09:37.474 7259.372 - 7309.785: 95.1134% ( 159) 00:09:37.474 7309.785 - 7360.197: 95.5423% ( 84) 00:09:37.474 7360.197 - 7410.609: 95.8640% ( 63) 00:09:37.474 7410.609 - 7461.022: 96.1142% ( 49) 00:09:37.474 7461.022 - 7511.434: 96.3031% ( 37) 00:09:37.474 7511.434 - 7561.846: 96.4614% ( 31) 00:09:37.474 7561.846 - 7612.258: 96.6095% ( 29) 00:09:37.474 7612.258 - 7662.671: 96.7320% ( 24) 00:09:37.474 7662.671 - 7713.083: 96.8495% ( 23) 00:09:37.474 7713.083 - 7763.495: 96.9669% ( 23) 00:09:37.474 7763.495 - 7813.908: 97.0537% ( 17) 00:09:37.474 7813.908 - 7864.320: 97.1558% ( 20) 00:09:37.474 7864.320 - 7914.732: 97.2375% ( 16) 00:09:37.474 7914.732 - 7965.145: 97.2886% ( 10) 00:09:37.474 7965.145 - 8015.557: 97.3499% ( 12) 00:09:37.474 8015.557 - 8065.969: 97.4009% ( 10) 00:09:37.474 8065.969 - 8116.382: 97.4418% ( 8) 00:09:37.474 8116.382 - 8166.794: 97.4929% ( 10) 00:09:37.474 8166.794 - 8217.206: 97.5286% ( 7) 00:09:37.474 8217.206 - 8267.618: 97.5643% ( 7) 00:09:37.474 8267.618 - 8318.031: 97.5950% ( 6) 00:09:37.474 8318.031 - 8368.443: 97.6358% 
( 8) 00:09:37.474 8368.443 - 8418.855: 97.6971% ( 12) 00:09:37.474 8418.855 - 8469.268: 97.8094% ( 22) 00:09:37.474 8469.268 - 8519.680: 97.8758% ( 13) 00:09:37.474 8519.680 - 8570.092: 97.9116% ( 7) 00:09:37.474 8570.092 - 8620.505: 97.9524% ( 8) 00:09:37.474 8620.505 - 8670.917: 97.9830% ( 6) 00:09:37.474 8670.917 - 8721.329: 98.0086% ( 5) 00:09:37.474 8721.329 - 8771.742: 98.0392% ( 6) 00:09:37.474 8771.742 - 8822.154: 98.0647% ( 5) 00:09:37.474 8822.154 - 8872.566: 98.0954% ( 6) 00:09:37.474 8872.566 - 8922.978: 98.1260% ( 6) 00:09:37.474 8922.978 - 8973.391: 98.1669% ( 8) 00:09:37.474 8973.391 - 9023.803: 98.2435% ( 15) 00:09:37.474 9023.803 - 9074.215: 98.3456% ( 20) 00:09:37.474 9074.215 - 9124.628: 98.4222% ( 15) 00:09:37.474 9124.628 - 9175.040: 98.5039% ( 16) 00:09:37.474 9175.040 - 9225.452: 98.5703% ( 13) 00:09:37.474 9225.452 - 9275.865: 98.6060% ( 7) 00:09:37.474 9275.865 - 9326.277: 98.6264% ( 4) 00:09:37.474 9326.277 - 9376.689: 98.6520% ( 5) 00:09:37.474 9376.689 - 9427.102: 98.6724% ( 4) 00:09:37.474 9427.102 - 9477.514: 98.6877% ( 3) 00:09:37.474 9477.514 - 9527.926: 98.6928% ( 1) 00:09:37.474 10384.935 - 10435.348: 98.6979% ( 1) 00:09:37.474 10435.348 - 10485.760: 98.7183% ( 4) 00:09:37.474 10485.760 - 10536.172: 98.7388% ( 4) 00:09:37.474 10536.172 - 10586.585: 98.7592% ( 4) 00:09:37.474 10586.585 - 10636.997: 98.7796% ( 4) 00:09:37.474 10636.997 - 10687.409: 98.7949% ( 3) 00:09:37.474 10687.409 - 10737.822: 98.8154% ( 4) 00:09:37.475 10737.822 - 10788.234: 98.8307% ( 3) 00:09:37.475 10788.234 - 10838.646: 98.8511% ( 4) 00:09:37.475 10838.646 - 10889.058: 98.8664% ( 3) 00:09:37.475 10889.058 - 10939.471: 98.8868% ( 4) 00:09:37.475 10939.471 - 10989.883: 98.9022% ( 3) 00:09:37.475 10989.883 - 11040.295: 98.9226% ( 4) 00:09:37.475 11040.295 - 11090.708: 98.9430% ( 4) 00:09:37.475 11090.708 - 11141.120: 98.9634% ( 4) 00:09:37.475 11141.120 - 11191.532: 98.9839% ( 4) 00:09:37.475 11191.532 - 11241.945: 98.9992% ( 3) 00:09:37.475 11241.945 - 11292.357: 99.0196% ( 4) 00:09:37.475 11292.357 - 11342.769: 99.0349% ( 3) 00:09:37.475 11342.769 - 11393.182: 99.0502% ( 3) 00:09:37.475 11393.182 - 11443.594: 99.0656% ( 3) 00:09:37.475 11443.594 - 11494.006: 99.0860% ( 4) 00:09:37.475 11494.006 - 11544.418: 99.1013% ( 3) 00:09:37.475 11544.418 - 11594.831: 99.1166% ( 3) 00:09:37.475 11594.831 - 11645.243: 99.1371% ( 4) 00:09:37.475 11645.243 - 11695.655: 99.1524% ( 3) 00:09:37.475 11695.655 - 11746.068: 99.1728% ( 4) 00:09:37.475 11746.068 - 11796.480: 99.1881% ( 3) 00:09:37.475 11796.480 - 11846.892: 99.2034% ( 3) 00:09:37.475 11846.892 - 11897.305: 99.2239% ( 4) 00:09:37.475 11897.305 - 11947.717: 99.2392% ( 3) 00:09:37.475 11947.717 - 11998.129: 99.2596% ( 4) 00:09:37.475 11998.129 - 12048.542: 99.2749% ( 3) 00:09:37.475 12048.542 - 12098.954: 99.2953% ( 4) 00:09:37.475 12098.954 - 12149.366: 99.3107% ( 3) 00:09:37.475 12149.366 - 12199.778: 99.3260% ( 3) 00:09:37.475 12199.778 - 12250.191: 99.3566% ( 6) 00:09:37.475 12250.191 - 12300.603: 99.3668% ( 2) 00:09:37.475 12300.603 - 12351.015: 99.3821% ( 3) 00:09:37.475 12351.015 - 12401.428: 99.3924% ( 2) 00:09:37.475 12401.428 - 12451.840: 99.4026% ( 2) 00:09:37.475 12451.840 - 12502.252: 99.4128% ( 2) 00:09:37.475 12502.252 - 12552.665: 99.4230% ( 2) 00:09:37.475 12552.665 - 12603.077: 99.4332% ( 2) 00:09:37.475 12603.077 - 12653.489: 99.4434% ( 2) 00:09:37.475 12653.489 - 12703.902: 99.4536% ( 2) 00:09:37.475 12703.902 - 12754.314: 99.4638% ( 2) 00:09:37.475 12754.314 - 12804.726: 99.4741% ( 2) 00:09:37.475 12804.726 - 12855.138: 
99.4843% ( 2) 00:09:37.475 12855.138 - 12905.551: 99.4945% ( 2) 00:09:37.475 12905.551 - 13006.375: 99.5149% ( 4) 00:09:37.475 13006.375 - 13107.200: 99.5353% ( 4) 00:09:37.475 13107.200 - 13208.025: 99.5558% ( 4) 00:09:37.475 13208.025 - 13308.849: 99.5762% ( 4) 00:09:37.475 13308.849 - 13409.674: 99.5966% ( 4) 00:09:37.475 13409.674 - 13510.498: 99.6221% ( 5) 00:09:37.475 13510.498 - 13611.323: 99.6426% ( 4) 00:09:37.475 13611.323 - 13712.148: 99.6630% ( 4) 00:09:37.475 13712.148 - 13812.972: 99.6885% ( 5) 00:09:37.475 13812.972 - 13913.797: 99.7089% ( 4) 00:09:37.475 13913.797 - 14014.622: 99.7294% ( 4) 00:09:37.475 14014.622 - 14115.446: 99.7498% ( 4) 00:09:37.475 14115.446 - 14216.271: 99.7702% ( 4) 00:09:37.475 14216.271 - 14317.095: 99.7906% ( 4) 00:09:37.475 14317.095 - 14417.920: 99.8111% ( 4) 00:09:37.475 14417.920 - 14518.745: 99.8315% ( 4) 00:09:37.475 14518.745 - 14619.569: 99.8519% ( 4) 00:09:37.475 14619.569 - 14720.394: 99.8775% ( 5) 00:09:37.475 14720.394 - 14821.218: 99.8979% ( 4) 00:09:37.475 14821.218 - 14922.043: 99.9183% ( 4) 00:09:37.475 14922.043 - 15022.868: 99.9387% ( 4) 00:09:37.475 15022.868 - 15123.692: 99.9643% ( 5) 00:09:37.475 15123.692 - 15224.517: 99.9847% ( 4) 00:09:37.475 15224.517 - 15325.342: 100.0000% ( 3) 00:09:37.475 00:09:37.475 20:02:44 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:37.475 00:09:37.475 real 0m2.613s 00:09:37.475 user 0m2.300s 00:09:37.475 sys 0m0.207s 00:09:37.475 20:02:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.475 20:02:44 -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 ************************************ 00:09:37.475 END TEST nvme_perf 00:09:37.475 ************************************ 00:09:37.475 20:02:44 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:37.475 20:02:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:37.475 20:02:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.475 20:02:44 -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 ************************************ 00:09:37.475 START TEST nvme_hello_world 00:09:37.475 ************************************ 00:09:37.475 20:02:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:37.475 Initializing NVMe Controllers 00:09:37.475 Attached to 0000:00:09.0 00:09:37.475 Namespace ID: 1 size: 1GB 00:09:37.475 Attached to 0000:00:06.0 00:09:37.475 Namespace ID: 1 size: 6GB 00:09:37.475 Attached to 0000:00:07.0 00:09:37.475 Namespace ID: 1 size: 5GB 00:09:37.475 Attached to 0000:00:08.0 00:09:37.475 Namespace ID: 1 size: 4GB 00:09:37.475 Namespace ID: 2 size: 4GB 00:09:37.475 Namespace ID: 3 size: 4GB 00:09:37.475 Initialization complete. 00:09:37.475 INFO: using host memory buffer for IO 00:09:37.475 Hello world! 00:09:37.475 INFO: using host memory buffer for IO 00:09:37.475 Hello world! 00:09:37.475 INFO: using host memory buffer for IO 00:09:37.475 Hello world! 00:09:37.475 INFO: using host memory buffer for IO 00:09:37.475 Hello world! 00:09:37.475 INFO: using host memory buffer for IO 00:09:37.475 Hello world! 00:09:37.475 INFO: using host memory buffer for IO 00:09:37.475 Hello world! 
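The "Latency histogram" tables above list microsecond buckets with cumulative IO counts and percentages per controller/namespace. As a rough illustration of how such cumulative buckets can be reduced to a percentile estimate, here is a minimal standalone C sketch; the bucket boundaries and counts are made-up placeholders, not values from this run, and this is not SPDK code.

```c
/*
 * Standalone sketch (not SPDK code): estimate a latency percentile from
 * cumulative histogram buckets like the "Range in us / Cumulative IO count"
 * tables printed above. Bucket boundaries and counts are placeholders.
 */
#include <stdio.h>

struct bucket {
    double upper_us;        /* upper bound of the bucket, in microseconds  */
    unsigned long cum_ios;  /* cumulative IO count up to and including it  */
};

/* Return the upper bound of the first bucket whose cumulative count
 * reaches the requested percentile. */
static double percentile_us(const struct bucket *b, int nbuckets, double pct)
{
    unsigned long total = b[nbuckets - 1].cum_ios;
    unsigned long target = (unsigned long)(total * pct / 100.0);

    for (int i = 0; i < nbuckets; i++) {
        if (b[i].cum_ios >= target)
            return b[i].upper_us;
    }
    return b[nbuckets - 1].upper_us;
}

int main(void)
{
    /* Placeholder buckets in the same spirit as the log output above. */
    struct bucket hist[] = {
        {  5747.0,  400 },
        {  6326.7, 1100 },
        {  7007.3, 1800 },
        { 25508.6, 1944 },
    };
    int n = sizeof(hist) / sizeof(hist[0]);

    printf("p50 <= %.1f us\n", percentile_us(hist, n, 50.0));
    printf("p99 <= %.1f us\n", percentile_us(hist, n, 99.0));
    return 0;
}
```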
00:09:37.475 00:09:37.475 real 0m0.256s 00:09:37.475 user 0m0.118s 00:09:37.475 sys 0m0.091s 00:09:37.475 20:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.475 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 ************************************ 00:09:37.475 END TEST nvme_hello_world 00:09:37.475 ************************************ 00:09:37.475 20:02:45 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:37.475 20:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.475 20:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.475 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 ************************************ 00:09:37.475 START TEST nvme_sgl 00:09:37.475 ************************************ 00:09:37.475 20:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:37.733 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:37.733 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:37.733 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:37.733 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:37.733 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:37.733 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:37.733 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:09:37.733 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:37.733 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:37.733 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:37.991 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:37.991 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:37.991 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:37.991 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:37.991 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:37.991 
0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:37.991 NVMe Readv/Writev Request test 00:09:37.991 Attached to 0000:00:09.0 00:09:37.991 Attached to 0000:00:06.0 00:09:37.991 Attached to 0000:00:07.0 00:09:37.991 Attached to 0000:00:08.0 00:09:37.991 0000:00:06.0: build_io_request_2 test passed 00:09:37.991 0000:00:06.0: build_io_request_4 test passed 00:09:37.991 0000:00:06.0: build_io_request_5 test passed 00:09:37.991 0000:00:06.0: build_io_request_6 test passed 00:09:37.991 0000:00:06.0: build_io_request_7 test passed 00:09:37.991 0000:00:06.0: build_io_request_10 test passed 00:09:37.991 0000:00:07.0: build_io_request_2 test passed 00:09:37.991 0000:00:07.0: build_io_request_4 test passed 00:09:37.991 0000:00:07.0: build_io_request_5 test passed 00:09:37.991 0000:00:07.0: build_io_request_6 test passed 00:09:37.991 0000:00:07.0: build_io_request_7 test passed 00:09:37.991 0000:00:07.0: build_io_request_10 test passed 00:09:37.991 Cleaning up... 00:09:37.991 ************************************ 00:09:37.991 END TEST nvme_sgl 00:09:37.991 ************************************ 00:09:37.991 00:09:37.991 real 0m0.374s 00:09:37.992 user 0m0.228s 00:09:37.992 sys 0m0.096s 00:09:37.992 20:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.992 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 20:02:45 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:37.992 20:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.992 20:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.992 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 ************************************ 00:09:37.992 START TEST nvme_e2edp 00:09:37.992 ************************************ 00:09:37.992 20:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:38.250 NVMe Write/Read with End-to-End data protection test 00:09:38.250 Attached to 0000:00:09.0 00:09:38.250 Attached to 0000:00:06.0 00:09:38.250 Attached to 0000:00:07.0 00:09:38.250 Attached to 0000:00:08.0 00:09:38.250 Cleaning up... 
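The nvme_sgl test above builds scatter-gather IO requests and expects the driver to reject several of them with "Invalid IO length parameter". A minimal standalone sketch of the kind of check involved follows; the struct and function names are illustrative assumptions, not the SPDK build_io_request implementations.

```c
/*
 * Standalone sketch: the kind of length check behind the
 * "Invalid IO length parameter" messages above. Names are assumptions,
 * not SPDK API.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct sg_entry {
    void  *base;
    size_t len;     /* bytes */
};

/* An IO described by a scatter-gather list is only valid if the total
 * payload is a non-zero multiple of the namespace block size. */
static bool io_length_valid(const struct sg_entry *sgl, int nents,
                            size_t block_size)
{
    size_t total = 0;

    for (int i = 0; i < nents; i++)
        total += sgl[i].len;

    return total != 0 && (total % block_size) == 0;
}

int main(void)
{
    struct sg_entry ok[]  = { { NULL, 2048 }, { NULL, 2048 } }; /* 4096 B */
    struct sg_entry bad[] = { { NULL, 2048 }, { NULL, 100 } };  /* 2148 B */

    printf("ok:  %s\n", io_length_valid(ok, 2, 512)  ? "valid" : "invalid");
    printf("bad: %s\n", io_length_valid(bad, 2, 512) ? "valid" : "invalid");
    return 0;
}
```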
00:09:38.250 00:09:38.250 real 0m0.196s 00:09:38.250 user 0m0.060s 00:09:38.250 sys 0m0.088s 00:09:38.250 ************************************ 00:09:38.250 END TEST nvme_e2edp 00:09:38.250 ************************************ 00:09:38.250 20:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.250 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:38.250 20:02:45 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:38.250 20:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.250 20:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.250 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:38.250 ************************************ 00:09:38.250 START TEST nvme_reserve 00:09:38.250 ************************************ 00:09:38.250 20:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:38.508 ===================================================== 00:09:38.508 NVMe Controller at PCI bus 0, device 9, function 0 00:09:38.508 ===================================================== 00:09:38.508 Reservations: Not Supported 00:09:38.508 ===================================================== 00:09:38.508 NVMe Controller at PCI bus 0, device 6, function 0 00:09:38.508 ===================================================== 00:09:38.508 Reservations: Not Supported 00:09:38.508 ===================================================== 00:09:38.508 NVMe Controller at PCI bus 0, device 7, function 0 00:09:38.508 ===================================================== 00:09:38.509 Reservations: Not Supported 00:09:38.509 ===================================================== 00:09:38.509 NVMe Controller at PCI bus 0, device 8, function 0 00:09:38.509 ===================================================== 00:09:38.509 Reservations: Not Supported 00:09:38.509 Reservation test passed 00:09:38.509 00:09:38.509 real 0m0.190s 00:09:38.509 user 0m0.072s 00:09:38.509 sys 0m0.078s 00:09:38.509 20:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.509 ************************************ 00:09:38.509 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:38.509 END TEST nvme_reserve 00:09:38.509 ************************************ 00:09:38.509 20:02:45 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:38.509 20:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.509 20:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.509 20:02:45 -- common/autotest_common.sh@10 -- # set +x 00:09:38.509 ************************************ 00:09:38.509 START TEST nvme_err_injection 00:09:38.509 ************************************ 00:09:38.509 20:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:38.767 NVMe Error Injection test 00:09:38.767 Attached to 0000:00:09.0 00:09:38.767 Attached to 0000:00:06.0 00:09:38.767 Attached to 0000:00:07.0 00:09:38.767 Attached to 0000:00:08.0 00:09:38.767 0000:00:09.0: get features failed as expected 00:09:38.767 0000:00:06.0: get features failed as expected 00:09:38.767 0000:00:07.0: get features failed as expected 00:09:38.767 0000:00:08.0: get features failed as expected 00:09:38.767 0000:00:09.0: get features successfully as expected 00:09:38.767 0000:00:06.0: get features successfully as expected 00:09:38.767 0000:00:07.0: get features 
successfully as expected 00:09:38.767 0000:00:08.0: get features successfully as expected 00:09:38.767 0000:00:08.0: read failed as expected 00:09:38.767 0000:00:09.0: read failed as expected 00:09:38.767 0000:00:06.0: read failed as expected 00:09:38.767 0000:00:07.0: read failed as expected 00:09:38.767 0000:00:09.0: read successfully as expected 00:09:38.767 0000:00:06.0: read successfully as expected 00:09:38.767 0000:00:07.0: read successfully as expected 00:09:38.767 0000:00:08.0: read successfully as expected 00:09:38.767 Cleaning up... 00:09:38.767 ************************************ 00:09:38.767 END TEST nvme_err_injection 00:09:38.767 ************************************ 00:09:38.767 00:09:38.767 real 0m0.234s 00:09:38.767 user 0m0.097s 00:09:38.767 sys 0m0.093s 00:09:38.767 20:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.767 20:02:46 -- common/autotest_common.sh@10 -- # set +x 00:09:38.767 20:02:46 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:38.767 20:02:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:38.767 20:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.767 20:02:46 -- common/autotest_common.sh@10 -- # set +x 00:09:38.767 ************************************ 00:09:38.767 START TEST nvme_overhead 00:09:38.767 ************************************ 00:09:38.767 20:02:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:40.143 Initializing NVMe Controllers 00:09:40.143 Attached to 0000:00:09.0 00:09:40.143 Attached to 0000:00:06.0 00:09:40.143 Attached to 0000:00:07.0 00:09:40.143 Attached to 0000:00:08.0 00:09:40.143 Initialization complete. Launching workers. 
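The nvme_err_injection test above follows a fixed pattern: arm a fault, expect the command to fail ("failed as expected"), disarm it, expect success ("successfully as expected"). A minimal standalone sketch of that round trip follows; the inject/get_features functions are stand-ins, not SPDK API.

```c
/*
 * Standalone sketch of the err_injection pattern above: arm a fault,
 * expect failure, disarm it, expect success. Stand-in functions only.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool fault_armed;

static void inject_error(bool on) { fault_armed = on; }

/* Stand-in for an admin command such as Get Features. */
static int get_features(void)
{
    return fault_armed ? -1 : 0;   /* fail while the fault is armed */
}

int main(void)
{
    inject_error(true);
    assert(get_features() != 0);   /* "get features failed as expected"       */

    inject_error(false);
    assert(get_features() == 0);   /* "get features successfully as expected" */

    puts("error injection round-trip behaved as expected");
    return 0;
}
```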
00:09:40.143 submit (in ns) avg, min, max = 11426.0, 10018.5, 297313.8 00:09:40.143 complete (in ns) avg, min, max = 7652.5, 7204.6, 67524.6 00:09:40.143 00:09:40.143 Submit histogram 00:09:40.143 ================ 00:09:40.143 Range in us Cumulative Count 00:09:40.143 9.994 - 10.043: 0.0057% ( 1) 00:09:40.143 10.437 - 10.486: 0.0113% ( 1) 00:09:40.143 10.732 - 10.782: 0.0622% ( 9) 00:09:40.143 10.782 - 10.831: 0.7404% ( 120) 00:09:40.143 10.831 - 10.880: 3.5101% ( 490) 00:09:40.143 10.880 - 10.929: 10.1628% ( 1177) 00:09:40.143 10.929 - 10.978: 20.4160% ( 1814) 00:09:40.143 10.978 - 11.028: 32.9810% ( 2223) 00:09:40.143 11.028 - 11.077: 45.8569% ( 2278) 00:09:40.143 11.077 - 11.126: 57.9980% ( 2148) 00:09:40.143 11.126 - 11.175: 68.1890% ( 1803) 00:09:40.143 11.175 - 11.225: 75.2204% ( 1244) 00:09:40.143 11.225 - 11.274: 79.7253% ( 797) 00:09:40.143 11.274 - 11.323: 82.1558% ( 430) 00:09:40.143 11.323 - 11.372: 83.8175% ( 294) 00:09:40.143 11.372 - 11.422: 84.8123% ( 176) 00:09:40.143 11.422 - 11.471: 85.6263% ( 144) 00:09:40.143 11.471 - 11.520: 86.3554% ( 129) 00:09:40.143 11.520 - 11.569: 86.9489% ( 105) 00:09:40.143 11.569 - 11.618: 87.5085% ( 99) 00:09:40.143 11.618 - 11.668: 88.1641% ( 116) 00:09:40.143 11.668 - 11.717: 88.8142% ( 115) 00:09:40.143 11.717 - 11.766: 89.6224% ( 143) 00:09:40.143 11.766 - 11.815: 90.4816% ( 152) 00:09:40.143 11.815 - 11.865: 91.1542% ( 119) 00:09:40.143 11.865 - 11.914: 91.8946% ( 131) 00:09:40.143 11.914 - 11.963: 92.5673% ( 119) 00:09:40.143 11.963 - 12.012: 93.1042% ( 95) 00:09:40.143 12.012 - 12.062: 93.6977% ( 105) 00:09:40.143 12.062 - 12.111: 94.1612% ( 82) 00:09:40.143 12.111 - 12.160: 94.5738% ( 73) 00:09:40.143 12.160 - 12.209: 94.9412% ( 65) 00:09:40.143 12.209 - 12.258: 95.2804% ( 60) 00:09:40.143 12.258 - 12.308: 95.4556% ( 31) 00:09:40.143 12.308 - 12.357: 95.6082% ( 27) 00:09:40.143 12.357 - 12.406: 95.6817% ( 13) 00:09:40.143 12.406 - 12.455: 95.7438% ( 11) 00:09:40.143 12.455 - 12.505: 95.7664% ( 4) 00:09:40.143 12.505 - 12.554: 95.7947% ( 5) 00:09:40.143 12.554 - 12.603: 95.8117% ( 3) 00:09:40.143 12.603 - 12.702: 95.8399% ( 5) 00:09:40.143 12.702 - 12.800: 95.8738% ( 6) 00:09:40.143 12.800 - 12.898: 95.9417% ( 12) 00:09:40.143 12.898 - 12.997: 96.0321% ( 16) 00:09:40.143 12.997 - 13.095: 96.1395% ( 19) 00:09:40.143 13.095 - 13.194: 96.2638% ( 22) 00:09:40.143 13.194 - 13.292: 96.3543% ( 16) 00:09:40.143 13.292 - 13.391: 96.4504% ( 17) 00:09:40.143 13.391 - 13.489: 96.5125% ( 11) 00:09:40.143 13.489 - 13.588: 96.5634% ( 9) 00:09:40.143 13.588 - 13.686: 96.6256% ( 11) 00:09:40.143 13.686 - 13.785: 96.6482% ( 4) 00:09:40.143 13.785 - 13.883: 96.6765% ( 5) 00:09:40.143 13.883 - 13.982: 96.7160% ( 7) 00:09:40.143 13.982 - 14.080: 96.7386% ( 4) 00:09:40.143 14.080 - 14.178: 96.7726% ( 6) 00:09:40.143 14.178 - 14.277: 96.8121% ( 7) 00:09:40.143 14.277 - 14.375: 96.8347% ( 4) 00:09:40.143 14.375 - 14.474: 96.8517% ( 3) 00:09:40.143 14.474 - 14.572: 96.8799% ( 5) 00:09:40.143 14.572 - 14.671: 96.9082% ( 5) 00:09:40.143 14.671 - 14.769: 96.9308% ( 4) 00:09:40.143 14.769 - 14.868: 96.9534% ( 4) 00:09:40.143 14.868 - 14.966: 97.0947% ( 25) 00:09:40.143 14.966 - 15.065: 97.3491% ( 45) 00:09:40.143 15.065 - 15.163: 97.6374% ( 51) 00:09:40.143 15.163 - 15.262: 97.7674% ( 23) 00:09:40.143 15.262 - 15.360: 97.8465% ( 14) 00:09:40.143 15.360 - 15.458: 97.8861% ( 7) 00:09:40.143 15.458 - 15.557: 97.9426% ( 10) 00:09:40.143 15.557 - 15.655: 97.9765% ( 6) 00:09:40.143 15.655 - 15.754: 97.9991% ( 4) 00:09:40.143 15.754 - 15.852: 98.0217% ( 4) 00:09:40.143 
15.852 - 15.951: 98.0387% ( 3) 00:09:40.143 15.951 - 16.049: 98.0443% ( 1) 00:09:40.143 16.049 - 16.148: 98.0613% ( 3) 00:09:40.143 16.148 - 16.246: 98.0726% ( 2) 00:09:40.143 16.246 - 16.345: 98.0895% ( 3) 00:09:40.143 16.345 - 16.443: 98.1121% ( 4) 00:09:40.143 16.443 - 16.542: 98.1234% ( 2) 00:09:40.143 16.542 - 16.640: 98.2026% ( 14) 00:09:40.143 16.640 - 16.738: 98.2648% ( 11) 00:09:40.143 16.738 - 16.837: 98.3608% ( 17) 00:09:40.143 16.837 - 16.935: 98.4456% ( 15) 00:09:40.143 16.935 - 17.034: 98.4965% ( 9) 00:09:40.143 17.034 - 17.132: 98.5587% ( 11) 00:09:40.143 17.132 - 17.231: 98.6435% ( 15) 00:09:40.143 17.231 - 17.329: 98.7282% ( 15) 00:09:40.143 17.329 - 17.428: 98.8074% ( 14) 00:09:40.143 17.428 - 17.526: 98.8752% ( 12) 00:09:40.143 17.526 - 17.625: 98.9430% ( 12) 00:09:40.143 17.625 - 17.723: 99.0165% ( 13) 00:09:40.143 17.723 - 17.822: 99.1013% ( 15) 00:09:40.143 17.822 - 17.920: 99.1126% ( 2) 00:09:40.143 17.920 - 18.018: 99.1635% ( 9) 00:09:40.143 18.018 - 18.117: 99.1917% ( 5) 00:09:40.143 18.117 - 18.215: 99.2087% ( 3) 00:09:40.143 18.215 - 18.314: 99.2256% ( 3) 00:09:40.143 18.314 - 18.412: 99.2596% ( 6) 00:09:40.143 18.412 - 18.511: 99.3048% ( 8) 00:09:40.143 18.511 - 18.609: 99.3274% ( 4) 00:09:40.143 18.609 - 18.708: 99.3443% ( 3) 00:09:40.143 18.708 - 18.806: 99.3556% ( 2) 00:09:40.143 18.806 - 18.905: 99.3783% ( 4) 00:09:40.143 18.905 - 19.003: 99.3952% ( 3) 00:09:40.143 19.003 - 19.102: 99.4122% ( 3) 00:09:40.143 19.102 - 19.200: 99.4178% ( 1) 00:09:40.143 19.200 - 19.298: 99.4291% ( 2) 00:09:40.143 19.298 - 19.397: 99.4404% ( 2) 00:09:40.143 19.495 - 19.594: 99.4517% ( 2) 00:09:40.143 19.692 - 19.791: 99.4630% ( 2) 00:09:40.143 19.791 - 19.889: 99.4743% ( 2) 00:09:40.143 19.889 - 19.988: 99.4800% ( 1) 00:09:40.143 20.086 - 20.185: 99.4913% ( 2) 00:09:40.143 20.185 - 20.283: 99.5083% ( 3) 00:09:40.143 20.283 - 20.382: 99.5139% ( 1) 00:09:40.143 20.382 - 20.480: 99.5196% ( 1) 00:09:40.143 20.677 - 20.775: 99.5252% ( 1) 00:09:40.143 20.972 - 21.071: 99.5309% ( 1) 00:09:40.143 21.071 - 21.169: 99.5535% ( 4) 00:09:40.143 21.169 - 21.268: 99.5591% ( 1) 00:09:40.143 21.465 - 21.563: 99.5648% ( 1) 00:09:40.143 21.563 - 21.662: 99.5704% ( 1) 00:09:40.143 21.760 - 21.858: 99.5761% ( 1) 00:09:40.144 21.858 - 21.957: 99.5817% ( 1) 00:09:40.144 21.957 - 22.055: 99.5930% ( 2) 00:09:40.144 22.351 - 22.449: 99.5987% ( 1) 00:09:40.144 22.449 - 22.548: 99.6043% ( 1) 00:09:40.144 23.040 - 23.138: 99.6100% ( 1) 00:09:40.144 23.335 - 23.434: 99.6213% ( 2) 00:09:40.144 23.532 - 23.631: 99.6270% ( 1) 00:09:40.144 23.729 - 23.828: 99.6326% ( 1) 00:09:40.144 23.828 - 23.926: 99.6383% ( 1) 00:09:40.144 24.123 - 24.222: 99.6439% ( 1) 00:09:40.144 24.320 - 24.418: 99.6552% ( 2) 00:09:40.144 24.517 - 24.615: 99.6609% ( 1) 00:09:40.144 25.009 - 25.108: 99.6665% ( 1) 00:09:40.144 26.191 - 26.388: 99.6722% ( 1) 00:09:40.144 27.766 - 27.963: 99.6778% ( 1) 00:09:40.144 27.963 - 28.160: 99.7456% ( 12) 00:09:40.144 28.160 - 28.357: 99.8304% ( 15) 00:09:40.144 28.357 - 28.554: 99.8757% ( 8) 00:09:40.144 28.554 - 28.751: 99.8983% ( 4) 00:09:40.144 28.751 - 28.948: 99.9096% ( 2) 00:09:40.144 28.948 - 29.145: 99.9209% ( 2) 00:09:40.144 29.342 - 29.538: 99.9265% ( 1) 00:09:40.144 29.735 - 29.932: 99.9322% ( 1) 00:09:40.144 31.311 - 31.508: 99.9378% ( 1) 00:09:40.144 32.492 - 32.689: 99.9435% ( 1) 00:09:40.144 33.871 - 34.068: 99.9491% ( 1) 00:09:40.144 34.068 - 34.265: 99.9548% ( 1) 00:09:40.144 36.234 - 36.431: 99.9604% ( 1) 00:09:40.144 38.991 - 39.188: 99.9661% ( 1) 00:09:40.144 47.262 - 47.458: 
99.9717% ( 1) 00:09:40.144 60.652 - 61.046: 99.9774% ( 1) 00:09:40.144 62.622 - 63.015: 99.9830% ( 1) 00:09:40.144 85.858 - 86.252: 99.9887% ( 1) 00:09:40.144 93.342 - 93.735: 99.9943% ( 1) 00:09:40.144 296.172 - 297.748: 100.0000% ( 1) 00:09:40.144 00:09:40.144 Complete histogram 00:09:40.144 ================== 00:09:40.144 Range in us Cumulative Count 00:09:40.144 7.188 - 7.237: 0.2204% ( 39) 00:09:40.144 7.237 - 7.286: 3.0805% ( 506) 00:09:40.144 7.286 - 7.335: 11.5702% ( 1502) 00:09:40.144 7.335 - 7.385: 29.0357% ( 3090) 00:09:40.144 7.385 - 7.434: 51.9783% ( 4059) 00:09:40.144 7.434 - 7.483: 71.4221% ( 3440) 00:09:40.144 7.483 - 7.532: 83.5688% ( 2149) 00:09:40.144 7.532 - 7.582: 89.7637% ( 1096) 00:09:40.144 7.582 - 7.631: 92.8781% ( 551) 00:09:40.144 7.631 - 7.680: 94.1895% ( 232) 00:09:40.144 7.680 - 7.729: 94.8169% ( 111) 00:09:40.144 7.729 - 7.778: 95.0656% ( 44) 00:09:40.144 7.778 - 7.828: 95.1560% ( 16) 00:09:40.144 7.828 - 7.877: 95.2408% ( 15) 00:09:40.144 7.877 - 7.926: 95.2804% ( 7) 00:09:40.144 7.926 - 7.975: 95.2973% ( 3) 00:09:40.144 7.975 - 8.025: 95.3086% ( 2) 00:09:40.144 8.025 - 8.074: 95.3256% ( 3) 00:09:40.144 8.074 - 8.123: 95.3821% ( 10) 00:09:40.144 8.123 - 8.172: 95.4951% ( 20) 00:09:40.144 8.172 - 8.222: 95.6817% ( 33) 00:09:40.144 8.222 - 8.271: 95.8399% ( 28) 00:09:40.144 8.271 - 8.320: 96.0265% ( 33) 00:09:40.144 8.320 - 8.369: 96.3486% ( 57) 00:09:40.144 8.369 - 8.418: 96.5578% ( 37) 00:09:40.144 8.418 - 8.468: 96.7217% ( 29) 00:09:40.144 8.468 - 8.517: 96.8913% ( 30) 00:09:40.144 8.517 - 8.566: 96.9308% ( 7) 00:09:40.144 8.566 - 8.615: 96.9591% ( 5) 00:09:40.144 8.615 - 8.665: 96.9704% ( 2) 00:09:40.144 8.665 - 8.714: 96.9930% ( 4) 00:09:40.144 8.911 - 8.960: 96.9986% ( 1) 00:09:40.144 9.157 - 9.206: 97.0043% ( 1) 00:09:40.144 9.502 - 9.551: 97.0099% ( 1) 00:09:40.144 9.551 - 9.600: 97.0156% ( 1) 00:09:40.144 9.600 - 9.649: 97.0213% ( 1) 00:09:40.144 9.698 - 9.748: 97.0269% ( 1) 00:09:40.144 9.797 - 9.846: 97.0326% ( 1) 00:09:40.144 9.846 - 9.895: 97.0439% ( 2) 00:09:40.144 9.945 - 9.994: 97.0552% ( 2) 00:09:40.144 10.043 - 10.092: 97.0608% ( 1) 00:09:40.144 10.338 - 10.388: 97.0778% ( 3) 00:09:40.144 10.388 - 10.437: 97.1965% ( 21) 00:09:40.144 10.437 - 10.486: 97.3943% ( 35) 00:09:40.144 10.486 - 10.535: 97.5978% ( 36) 00:09:40.144 10.535 - 10.585: 97.7391% ( 25) 00:09:40.144 10.585 - 10.634: 97.8634% ( 22) 00:09:40.144 10.634 - 10.683: 97.8917% ( 5) 00:09:40.144 10.683 - 10.732: 97.9087% ( 3) 00:09:40.144 10.732 - 10.782: 97.9143% ( 1) 00:09:40.144 10.831 - 10.880: 97.9369% ( 4) 00:09:40.144 10.880 - 10.929: 97.9426% ( 1) 00:09:40.144 10.978 - 11.028: 97.9482% ( 1) 00:09:40.144 11.028 - 11.077: 97.9539% ( 1) 00:09:40.144 11.175 - 11.225: 97.9652% ( 2) 00:09:40.144 11.323 - 11.372: 97.9708% ( 1) 00:09:40.144 11.422 - 11.471: 97.9765% ( 1) 00:09:40.144 11.569 - 11.618: 97.9991% ( 4) 00:09:40.144 11.914 - 11.963: 98.0161% ( 3) 00:09:40.144 12.062 - 12.111: 98.0274% ( 2) 00:09:40.144 12.160 - 12.209: 98.0387% ( 2) 00:09:40.144 12.209 - 12.258: 98.0443% ( 1) 00:09:40.144 12.406 - 12.455: 98.0500% ( 1) 00:09:40.144 12.455 - 12.505: 98.0556% ( 1) 00:09:40.144 12.505 - 12.554: 98.0613% ( 1) 00:09:40.144 12.702 - 12.800: 98.0726% ( 2) 00:09:40.144 12.800 - 12.898: 98.1517% ( 14) 00:09:40.144 12.898 - 12.997: 98.1856% ( 6) 00:09:40.144 12.997 - 13.095: 98.2817% ( 17) 00:09:40.144 13.095 - 13.194: 98.3778% ( 17) 00:09:40.144 13.194 - 13.292: 98.4456% ( 12) 00:09:40.144 13.292 - 13.391: 98.5643% ( 21) 00:09:40.144 13.391 - 13.489: 98.6378% ( 13) 00:09:40.144 
13.489 - 13.588: 98.7169% ( 14) 00:09:40.144 13.588 - 13.686: 98.8413% ( 22) 00:09:40.144 13.686 - 13.785: 98.9261% ( 15) 00:09:40.144 13.785 - 13.883: 98.9939% ( 12) 00:09:40.144 13.883 - 13.982: 99.0561% ( 11) 00:09:40.144 13.982 - 14.080: 99.0730% ( 3) 00:09:40.144 14.080 - 14.178: 99.1239% ( 9) 00:09:40.144 14.178 - 14.277: 99.1578% ( 6) 00:09:40.144 14.277 - 14.375: 99.1861% ( 5) 00:09:40.144 14.375 - 14.474: 99.2313% ( 8) 00:09:40.144 14.474 - 14.572: 99.2539% ( 4) 00:09:40.144 14.572 - 14.671: 99.2822% ( 5) 00:09:40.144 14.671 - 14.769: 99.3330% ( 9) 00:09:40.144 14.769 - 14.868: 99.3613% ( 5) 00:09:40.144 14.868 - 14.966: 99.3669% ( 1) 00:09:40.144 14.966 - 15.065: 99.3726% ( 1) 00:09:40.144 15.065 - 15.163: 99.4122% ( 7) 00:09:40.144 15.163 - 15.262: 99.4461% ( 6) 00:09:40.144 15.262 - 15.360: 99.4517% ( 1) 00:09:40.144 15.458 - 15.557: 99.4630% ( 2) 00:09:40.144 15.655 - 15.754: 99.4800% ( 3) 00:09:40.144 15.754 - 15.852: 99.4913% ( 2) 00:09:40.144 15.852 - 15.951: 99.5026% ( 2) 00:09:40.144 15.951 - 16.049: 99.5083% ( 1) 00:09:40.144 16.148 - 16.246: 99.5139% ( 1) 00:09:40.144 16.345 - 16.443: 99.5196% ( 1) 00:09:40.144 16.443 - 16.542: 99.5309% ( 2) 00:09:40.144 16.542 - 16.640: 99.5365% ( 1) 00:09:40.144 16.640 - 16.738: 99.5422% ( 1) 00:09:40.144 16.837 - 16.935: 99.5535% ( 2) 00:09:40.144 16.935 - 17.034: 99.5591% ( 1) 00:09:40.144 17.034 - 17.132: 99.5704% ( 2) 00:09:40.144 17.132 - 17.231: 99.5817% ( 2) 00:09:40.144 17.723 - 17.822: 99.5874% ( 1) 00:09:40.144 17.822 - 17.920: 99.5930% ( 1) 00:09:40.144 17.920 - 18.018: 99.5987% ( 1) 00:09:40.144 18.117 - 18.215: 99.6043% ( 1) 00:09:40.144 18.412 - 18.511: 99.6100% ( 1) 00:09:40.144 18.609 - 18.708: 99.6156% ( 1) 00:09:40.144 18.806 - 18.905: 99.6213% ( 1) 00:09:40.144 19.200 - 19.298: 99.6270% ( 1) 00:09:40.144 19.692 - 19.791: 99.6383% ( 2) 00:09:40.144 19.889 - 19.988: 99.6439% ( 1) 00:09:40.144 20.086 - 20.185: 99.6552% ( 2) 00:09:40.144 20.185 - 20.283: 99.7230% ( 12) 00:09:40.144 20.283 - 20.382: 99.7796% ( 10) 00:09:40.144 20.382 - 20.480: 99.8022% ( 4) 00:09:40.144 20.480 - 20.578: 99.8361% ( 6) 00:09:40.144 20.578 - 20.677: 99.8530% ( 3) 00:09:40.144 20.677 - 20.775: 99.8643% ( 2) 00:09:40.144 20.775 - 20.874: 99.8757% ( 2) 00:09:40.144 20.972 - 21.071: 99.8813% ( 1) 00:09:40.144 21.071 - 21.169: 99.8870% ( 1) 00:09:40.144 21.858 - 21.957: 99.9039% ( 3) 00:09:40.144 22.154 - 22.252: 99.9096% ( 1) 00:09:40.144 22.745 - 22.843: 99.9152% ( 1) 00:09:40.144 23.237 - 23.335: 99.9209% ( 1) 00:09:40.144 23.434 - 23.532: 99.9322% ( 2) 00:09:40.144 23.532 - 23.631: 99.9378% ( 1) 00:09:40.144 25.403 - 25.600: 99.9435% ( 1) 00:09:40.144 30.720 - 30.917: 99.9491% ( 1) 00:09:40.144 31.705 - 31.902: 99.9548% ( 1) 00:09:40.144 36.431 - 36.628: 99.9604% ( 1) 00:09:40.144 38.991 - 39.188: 99.9661% ( 1) 00:09:40.144 57.895 - 58.289: 99.9774% ( 2) 00:09:40.144 58.289 - 58.683: 99.9830% ( 1) 00:09:40.144 62.228 - 62.622: 99.9887% ( 1) 00:09:40.144 64.591 - 64.985: 99.9943% ( 1) 00:09:40.144 67.348 - 67.742: 100.0000% ( 1) 00:09:40.144 00:09:40.144 ************************************ 00:09:40.144 END TEST nvme_overhead 00:09:40.144 ************************************ 00:09:40.144 00:09:40.144 real 0m1.221s 00:09:40.144 user 0m1.078s 00:09:40.144 sys 0m0.092s 00:09:40.144 20:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:40.144 20:02:47 -- common/autotest_common.sh@10 -- # set +x 00:09:40.144 20:02:47 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 
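The arbitration run launched above gives each core its own priority queue and reports per-core IO/s. As a rough illustration of the weighted round-robin idea being exercised, here is a minimal standalone sketch; the weights, queue names, and drain loop are made-up and do not model the NVMe WRR registers or the SPDK example.

```c
/*
 * Standalone sketch of weighted round-robin queue arbitration.
 * Weights and queue names are illustrative placeholders.
 */
#include <stdio.h>

struct queue {
    const char *name;
    int weight;      /* commands the queue may issue per round */
    int pending;     /* commands still waiting                  */
};

int main(void)
{
    struct queue qs[] = {
        { "urgent", 4, 6 },
        { "high",   2, 6 },
        { "low",    1, 6 },
    };
    int nq = sizeof(qs) / sizeof(qs[0]);
    int issued = 1;

    /* Keep cycling until every queue has drained. */
    while (issued) {
        issued = 0;
        for (int i = 0; i < nq; i++) {
            int burst = qs[i].pending < qs[i].weight ? qs[i].pending
                                                     : qs[i].weight;
            if (burst) {
                printf("%s: issuing %d command(s)\n", qs[i].name, burst);
                qs[i].pending -= burst;
                issued += burst;
            }
        }
    }
    return 0;
}
```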
00:09:40.144 20:02:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:40.144 20:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:40.144 20:02:47 -- common/autotest_common.sh@10 -- # set +x 00:09:40.144 ************************************ 00:09:40.144 START TEST nvme_arbitration 00:09:40.144 ************************************ 00:09:40.145 20:02:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:43.426 Initializing NVMe Controllers 00:09:43.426 Attached to 0000:00:09.0 00:09:43.426 Attached to 0000:00:06.0 00:09:43.426 Attached to 0000:00:07.0 00:09:43.426 Attached to 0000:00:08.0 00:09:43.426 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:43.426 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:43.426 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:43.426 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:43.426 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:43.426 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:43.426 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:43.426 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:43.426 Initialization complete. Launching workers. 00:09:43.426 Starting thread on core 1 with urgent priority queue 00:09:43.426 Starting thread on core 2 with urgent priority queue 00:09:43.426 Starting thread on core 3 with urgent priority queue 00:09:43.426 Starting thread on core 0 with urgent priority queue 00:09:43.426 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:43.426 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:43.426 QEMU NVMe Ctrl (12340 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:43.426 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:43.426 QEMU NVMe Ctrl (12341 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:09:43.426 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:09:43.426 ======================================================== 00:09:43.426 00:09:43.426 00:09:43.426 real 0m3.402s 00:09:43.426 user 0m9.538s 00:09:43.426 sys 0m0.106s 00:09:43.426 ************************************ 00:09:43.426 END TEST nvme_arbitration 00:09:43.426 ************************************ 00:09:43.426 20:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.426 20:02:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.426 20:02:50 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:43.426 20:02:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:43.426 20:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.426 20:02:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.426 ************************************ 00:09:43.426 START TEST nvme_single_aen 00:09:43.426 ************************************ 00:09:43.426 20:02:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:43.426 [2024-12-16 20:02:50.983647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:43.426 [2024-12-16 20:02:50.983822] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.684 [2024-12-16 20:02:51.126010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:43.684 [2024-12-16 20:02:51.128562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:43.684 [2024-12-16 20:02:51.130603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:43.684 [2024-12-16 20:02:51.132468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:43.684 Asynchronous Event Request test 00:09:43.684 Attached to 0000:00:09.0 00:09:43.684 Attached to 0000:00:06.0 00:09:43.684 Attached to 0000:00:07.0 00:09:43.684 Attached to 0000:00:08.0 00:09:43.684 Reset controller to setup AER completions for this process 00:09:43.684 Registering asynchronous event callbacks... 00:09:43.685 Getting orig temperature thresholds of all controllers 00:09:43.685 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.685 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.685 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.685 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.685 Setting all controllers temperature threshold low to trigger AER 00:09:43.685 Waiting for all controllers temperature threshold to be set lower 00:09:43.685 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.685 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:43.685 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.685 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:43.685 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.685 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:43.685 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.685 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:43.685 Waiting for all controllers to trigger AER and reset threshold 00:09:43.685 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.685 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.685 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.685 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.685 Cleaning up... 
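The single_aen test above lowers each controller's temperature threshold below the current reading so that an asynchronous event fires and the callback resets the threshold. A minimal standalone sketch of that flow follows; the values mirror the log (343 K threshold, 323 K current), but the struct and callback shape are illustrative only, not SPDK API.

```c
/*
 * Standalone sketch of the single_aen flow above: lower the temperature
 * threshold below the current reading so the (simulated) controller fires
 * an asynchronous event. Illustrative types and values only.
 */
#include <stdio.h>

typedef void (*aer_cb)(int current_kelvin, int threshold_kelvin);

struct ctrlr_sim {
    int temp_kelvin;        /* current composite temperature */
    int threshold_kelvin;   /* over-temperature threshold    */
    aer_cb cb;              /* fired when temp >= threshold  */
};

static void set_threshold(struct ctrlr_sim *c, int kelvin)
{
    c->threshold_kelvin = kelvin;
    if (c->temp_kelvin >= c->threshold_kelvin && c->cb)
        c->cb(c->temp_kelvin, c->threshold_kelvin);   /* AER completion */
}

static void on_temp_aer(int current, int threshold)
{
    printf("aer_cb: temperature %d K crossed threshold %d K\n",
           current, threshold);
}

int main(void)
{
    /* Matches the pattern in the log: original threshold 343 K (70 C),
     * current temperature 323 K (50 C). */
    struct ctrlr_sim c = { .temp_kelvin = 323, .threshold_kelvin = 343,
                           .cb = on_temp_aer };

    set_threshold(&c, 322);   /* "Setting ... threshold low to trigger AER" */
    set_threshold(&c, 343);   /* restore the original threshold             */
    return 0;
}
```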
00:09:43.685 00:09:43.685 real 0m0.216s 00:09:43.685 user 0m0.077s 00:09:43.685 sys 0m0.092s 00:09:43.685 20:02:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.685 20:02:51 -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 ************************************ 00:09:43.685 END TEST nvme_single_aen 00:09:43.685 ************************************ 00:09:43.685 20:02:51 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:43.685 20:02:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:43.685 20:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.685 20:02:51 -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 ************************************ 00:09:43.685 START TEST nvme_doorbell_aers 00:09:43.685 ************************************ 00:09:43.685 20:02:51 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:43.685 20:02:51 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:43.685 20:02:51 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:43.685 20:02:51 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:43.685 20:02:51 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:43.685 20:02:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:43.685 20:02:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:43.685 20:02:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:43.685 20:02:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:43.685 20:02:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:43.685 20:02:51 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:43.685 20:02:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:43.685 20:02:51 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:43.685 20:02:51 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:43.943 [2024-12-16 20:02:51.472109] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:09:53.910 Executing: test_write_invalid_db 00:09:53.910 Waiting for AER completion... 00:09:53.910 Failure: test_write_invalid_db 00:09:53.910 00:09:53.910 Executing: test_invalid_db_write_overflow_sq 00:09:53.910 Waiting for AER completion... 00:09:53.910 Failure: test_invalid_db_write_overflow_sq 00:09:53.910 00:09:53.910 Executing: test_invalid_db_write_overflow_cq 00:09:53.910 Waiting for AER completion... 00:09:53.910 Failure: test_invalid_db_write_overflow_cq 00:09:53.910 00:09:53.910 20:03:01 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:53.910 20:03:01 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:53.910 [2024-12-16 20:03:01.503868] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:03.893 Executing: test_write_invalid_db 00:10:03.893 Waiting for AER completion... 00:10:03.893 Failure: test_write_invalid_db 00:10:03.893 00:10:03.893 Executing: test_invalid_db_write_overflow_sq 00:10:03.893 Waiting for AER completion... 
00:10:03.893 Failure: test_invalid_db_write_overflow_sq 00:10:03.893 00:10:03.893 Executing: test_invalid_db_write_overflow_cq 00:10:03.893 Waiting for AER completion... 00:10:03.893 Failure: test_invalid_db_write_overflow_cq 00:10:03.893 00:10:03.893 20:03:11 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:03.893 20:03:11 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:04.154 [2024-12-16 20:03:11.542927] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:14.150 Executing: test_write_invalid_db 00:10:14.150 Waiting for AER completion... 00:10:14.150 Failure: test_write_invalid_db 00:10:14.150 00:10:14.150 Executing: test_invalid_db_write_overflow_sq 00:10:14.150 Waiting for AER completion... 00:10:14.150 Failure: test_invalid_db_write_overflow_sq 00:10:14.150 00:10:14.150 Executing: test_invalid_db_write_overflow_cq 00:10:14.150 Waiting for AER completion... 00:10:14.150 Failure: test_invalid_db_write_overflow_cq 00:10:14.150 00:10:14.150 20:03:21 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:14.150 20:03:21 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:14.150 [2024-12-16 20:03:21.573023] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 Executing: test_write_invalid_db 00:10:24.121 Waiting for AER completion... 00:10:24.121 Failure: test_write_invalid_db 00:10:24.121 00:10:24.121 Executing: test_invalid_db_write_overflow_sq 00:10:24.121 Waiting for AER completion... 00:10:24.121 Failure: test_invalid_db_write_overflow_sq 00:10:24.121 00:10:24.121 Executing: test_invalid_db_write_overflow_cq 00:10:24.121 Waiting for AER completion... 00:10:24.121 Failure: test_invalid_db_write_overflow_cq 00:10:24.121 00:10:24.121 00:10:24.121 real 0m40.185s 00:10:24.121 user 0m34.218s 00:10:24.121 sys 0m5.607s 00:10:24.121 20:03:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.121 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 ************************************ 00:10:24.121 END TEST nvme_doorbell_aers 00:10:24.121 ************************************ 00:10:24.121 20:03:31 -- nvme/nvme.sh@97 -- # uname 00:10:24.121 20:03:31 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:24.121 20:03:31 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:24.121 20:03:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:24.121 20:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.121 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 ************************************ 00:10:24.121 START TEST nvme_multi_aen 00:10:24.121 ************************************ 00:10:24.121 20:03:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:24.121 [2024-12-16 20:03:31.484696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:24.121 [2024-12-16 20:03:31.484887] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.121 [2024-12-16 20:03:31.615187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:24.121 [2024-12-16 20:03:31.615320] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.615403] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.615432] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.617022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:24.121 [2024-12-16 20:03:31.617112] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.617189] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.617217] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.618150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:24.121 [2024-12-16 20:03:31.618216] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.618276] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.618311] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.619219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:24.121 [2024-12-16 20:03:31.619280] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.121 [2024-12-16 20:03:31.619351] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.122 [2024-12-16 20:03:31.619386] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63830) is not found. Dropping the request. 00:10:24.122 Child process pid: 64357 00:10:24.122 [2024-12-16 20:03:31.629036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:24.122 [2024-12-16 20:03:31.629204] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.380 [Child] Asynchronous Event Request test 00:10:24.380 [Child] Attached to 0000:00:09.0 00:10:24.380 [Child] Attached to 0000:00:06.0 00:10:24.380 [Child] Attached to 0000:00:07.0 00:10:24.380 [Child] Attached to 0000:00:08.0 00:10:24.380 [Child] Registering asynchronous event callbacks... 00:10:24.380 [Child] Getting orig temperature thresholds of all controllers 00:10:24.380 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:24.380 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 [Child] Cleaning up... 00:10:24.380 Asynchronous Event Request test 00:10:24.380 Attached to 0000:00:09.0 00:10:24.380 Attached to 0000:00:06.0 00:10:24.380 Attached to 0000:00:07.0 00:10:24.380 Attached to 0000:00:08.0 00:10:24.380 Reset controller to setup AER completions for this process 00:10:24.380 Registering asynchronous event callbacks... 
00:10:24.380 Getting orig temperature thresholds of all controllers 00:10:24.380 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:24.380 Setting all controllers temperature threshold low to trigger AER 00:10:24.380 Waiting for all controllers temperature threshold to be set lower 00:10:24.380 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:24.380 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:24.380 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:24.380 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:24.380 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:24.380 Waiting for all controllers to trigger AER and reset threshold 00:10:24.380 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:24.380 Cleaning up... 00:10:24.380 00:10:24.380 real 0m0.400s 00:10:24.380 user 0m0.119s 00:10:24.380 sys 0m0.181s 00:10:24.380 20:03:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.380 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 ************************************ 00:10:24.380 END TEST nvme_multi_aen 00:10:24.380 ************************************ 00:10:24.380 20:03:31 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:24.380 20:03:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:24.380 20:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.380 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 ************************************ 00:10:24.380 START TEST nvme_startup 00:10:24.380 ************************************ 00:10:24.380 20:03:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:24.641 Initializing NVMe Controllers 00:10:24.641 Attached to 0000:00:09.0 00:10:24.641 Attached to 0000:00:06.0 00:10:24.641 Attached to 0000:00:07.0 00:10:24.641 Attached to 0000:00:08.0 00:10:24.641 Initialization complete. 00:10:24.641 Time used:142160.172 (us). 
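The nvme_multi_secondary stage further below exercises DPDK multi-process mode: three spdk_nvme_perf instances share the same shared-memory group ("-i 0") while running on disjoint core masks, and with --proc-type=auto the first one to initialize presumably acts as the primary. A minimal sketch of that arrangement using the exact flags this run passes (the harness launches and waits on them via run_test, not a plain shell script):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # 5 s run on core 0
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s run on core 1
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # 3 s run on core 2
  wait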
00:10:24.641 00:10:24.641 real 0m0.201s 00:10:24.641 user 0m0.064s 00:10:24.641 sys 0m0.085s 00:10:24.641 20:03:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.641 20:03:32 -- common/autotest_common.sh@10 -- # set +x 00:10:24.641 ************************************ 00:10:24.641 END TEST nvme_startup 00:10:24.641 ************************************ 00:10:24.641 20:03:32 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:24.641 20:03:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:24.641 20:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.641 20:03:32 -- common/autotest_common.sh@10 -- # set +x 00:10:24.641 ************************************ 00:10:24.641 START TEST nvme_multi_secondary 00:10:24.641 ************************************ 00:10:24.641 20:03:32 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:24.641 20:03:32 -- nvme/nvme.sh@52 -- # pid0=64402 00:10:24.641 20:03:32 -- nvme/nvme.sh@54 -- # pid1=64403 00:10:24.641 20:03:32 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:24.641 20:03:32 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:24.641 20:03:32 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:27.998 Initializing NVMe Controllers 00:10:27.998 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:27.998 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:27.998 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:27.998 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:27.998 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:27.998 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:27.998 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:27.998 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:27.998 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:27.998 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:27.998 Initialization complete. Launching workers. 
00:10:27.998 ======================================================== 00:10:27.998 Latency(us) 00:10:27.998 Device Information : IOPS MiB/s Average min max 00:10:27.998 PCIE (0000:00:09.0) NSID 1 from core 1: 7245.67 28.30 2207.81 832.97 7513.72 00:10:27.998 PCIE (0000:00:06.0) NSID 1 from core 1: 7245.67 28.30 2207.32 804.02 7674.56 00:10:27.998 PCIE (0000:00:07.0) NSID 1 from core 1: 7245.67 28.30 2208.32 810.17 7291.23 00:10:27.998 PCIE (0000:00:08.0) NSID 1 from core 1: 7245.67 28.30 2208.60 835.59 7321.45 00:10:27.998 PCIE (0000:00:08.0) NSID 2 from core 1: 7245.67 28.30 2208.73 827.98 7426.00 00:10:27.998 PCIE (0000:00:08.0) NSID 3 from core 1: 7245.67 28.30 2209.23 808.16 7471.30 00:10:27.998 ======================================================== 00:10:27.998 Total : 43474.05 169.82 2208.33 804.02 7674.56 00:10:27.998 00:10:28.259 Initializing NVMe Controllers 00:10:28.259 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:28.259 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:28.259 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:28.259 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:28.259 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:28.259 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:28.259 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:28.259 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:28.259 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:28.259 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:28.259 Initialization complete. Launching workers. 00:10:28.259 ======================================================== 00:10:28.259 Latency(us) 00:10:28.259 Device Information : IOPS MiB/s Average min max 00:10:28.259 PCIE (0000:00:09.0) NSID 1 from core 2: 3023.78 11.81 5291.00 1104.03 12781.82 00:10:28.259 PCIE (0000:00:06.0) NSID 1 from core 2: 3023.78 11.81 5289.77 1092.19 13677.42 00:10:28.259 PCIE (0000:00:07.0) NSID 1 from core 2: 3023.78 11.81 5290.51 1111.51 13949.45 00:10:28.259 PCIE (0000:00:08.0) NSID 1 from core 2: 3023.78 11.81 5288.04 1097.45 13592.70 00:10:28.259 PCIE (0000:00:08.0) NSID 2 from core 2: 3023.78 11.81 5287.15 1114.04 13290.19 00:10:28.259 PCIE (0000:00:08.0) NSID 3 from core 2: 3023.78 11.81 5287.18 1118.62 12667.50 00:10:28.259 ======================================================== 00:10:28.259 Total : 18142.67 70.87 5288.94 1092.19 13949.45 00:10:28.259 00:10:28.259 20:03:35 -- nvme/nvme.sh@56 -- # wait 64402 00:10:30.173 Initializing NVMe Controllers 00:10:30.173 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:30.174 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:30.174 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:30.174 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:30.174 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:30.174 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:30.174 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:30.174 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:30.174 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:30.174 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:30.174 Initialization complete. Launching workers. 
00:10:30.174 ======================================================== 00:10:30.174 Latency(us) 00:10:30.174 Device Information : IOPS MiB/s Average min max 00:10:30.174 PCIE (0000:00:09.0) NSID 1 from core 0: 10613.68 41.46 1507.13 745.32 7044.05 00:10:30.174 PCIE (0000:00:06.0) NSID 1 from core 0: 10613.68 41.46 1506.30 731.12 7766.00 00:10:30.174 PCIE (0000:00:07.0) NSID 1 from core 0: 10613.68 41.46 1507.08 749.93 8228.68 00:10:30.174 PCIE (0000:00:08.0) NSID 1 from core 0: 10613.68 41.46 1507.05 683.39 7747.15 00:10:30.174 PCIE (0000:00:08.0) NSID 2 from core 0: 10613.68 41.46 1507.03 653.15 7267.35 00:10:30.174 PCIE (0000:00:08.0) NSID 3 from core 0: 10613.68 41.46 1507.01 622.39 7077.49 00:10:30.174 ======================================================== 00:10:30.174 Total : 63682.07 248.76 1506.93 622.39 8228.68 00:10:30.174 00:10:30.174 20:03:37 -- nvme/nvme.sh@57 -- # wait 64403 00:10:30.174 20:03:37 -- nvme/nvme.sh@61 -- # pid0=64472 00:10:30.174 20:03:37 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:30.174 20:03:37 -- nvme/nvme.sh@63 -- # pid1=64473 00:10:30.174 20:03:37 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:30.174 20:03:37 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:33.462 Initializing NVMe Controllers 00:10:33.462 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:33.462 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:33.462 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:33.462 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:33.462 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:33.462 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:33.462 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:33.462 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:33.462 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:33.462 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:33.462 Initialization complete. Launching workers. 
00:10:33.462 ======================================================== 00:10:33.462 Latency(us) 00:10:33.462 Device Information : IOPS MiB/s Average min max 00:10:33.462 PCIE (0000:00:09.0) NSID 1 from core 0: 6933.33 27.08 2307.32 769.09 9640.21 00:10:33.462 PCIE (0000:00:06.0) NSID 1 from core 0: 6933.33 27.08 2307.05 732.81 8832.30 00:10:33.462 PCIE (0000:00:07.0) NSID 1 from core 0: 6933.33 27.08 2308.43 758.85 9586.87 00:10:33.462 PCIE (0000:00:08.0) NSID 1 from core 0: 6933.33 27.08 2309.04 754.61 8158.42 00:10:33.462 PCIE (0000:00:08.0) NSID 2 from core 0: 6933.33 27.08 2309.65 770.36 10195.65 00:10:33.462 PCIE (0000:00:08.0) NSID 3 from core 0: 6933.33 27.08 2309.83 766.59 9596.78 00:10:33.462 ======================================================== 00:10:33.462 Total : 41600.01 162.50 2308.55 732.81 10195.65 00:10:33.462 00:10:33.462 Initializing NVMe Controllers 00:10:33.462 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:33.462 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:33.462 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:33.462 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:33.462 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:33.462 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:33.462 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:33.462 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:33.462 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:33.462 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:33.462 Initialization complete. Launching workers. 00:10:33.462 ======================================================== 00:10:33.462 Latency(us) 00:10:33.462 Device Information : IOPS MiB/s Average min max 00:10:33.462 PCIE (0000:00:09.0) NSID 1 from core 1: 7065.30 27.60 2264.16 1007.34 10209.06 00:10:33.462 PCIE (0000:00:06.0) NSID 1 from core 1: 7065.30 27.60 2263.38 975.37 10862.27 00:10:33.462 PCIE (0000:00:07.0) NSID 1 from core 1: 7065.30 27.60 2264.24 1015.71 11327.00 00:10:33.462 PCIE (0000:00:08.0) NSID 1 from core 1: 7065.30 27.60 2264.19 1024.26 11457.83 00:10:33.462 PCIE (0000:00:08.0) NSID 2 from core 1: 7065.30 27.60 2264.14 1008.98 11346.45 00:10:33.462 PCIE (0000:00:08.0) NSID 3 from core 1: 7065.30 27.60 2264.10 1007.64 9260.60 00:10:33.462 ======================================================== 00:10:33.462 Total : 42391.82 165.59 2264.04 975.37 11457.83 00:10:33.462 00:10:36.006 Initializing NVMe Controllers 00:10:36.006 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:36.006 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:36.006 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:36.006 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:36.006 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:36.006 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:36.007 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:36.007 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:36.007 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:36.007 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:36.007 Initialization complete. Launching workers. 
00:10:36.007 ======================================================== 00:10:36.007 Latency(us) 00:10:36.007 Device Information : IOPS MiB/s Average min max 00:10:36.007 PCIE (0000:00:09.0) NSID 1 from core 2: 3739.95 14.61 4277.76 878.07 14226.20 00:10:36.007 PCIE (0000:00:06.0) NSID 1 from core 2: 3739.95 14.61 4276.71 865.07 16242.37 00:10:36.007 PCIE (0000:00:07.0) NSID 1 from core 2: 3739.95 14.61 4277.31 778.53 16242.25 00:10:36.007 PCIE (0000:00:08.0) NSID 1 from core 2: 3739.95 14.61 4280.88 861.52 16482.11 00:10:36.007 PCIE (0000:00:08.0) NSID 2 from core 2: 3739.95 14.61 4280.81 866.98 13780.87 00:10:36.007 PCIE (0000:00:08.0) NSID 3 from core 2: 3739.95 14.61 4280.96 879.32 14106.14 00:10:36.007 ======================================================== 00:10:36.007 Total : 22439.70 87.66 4279.07 778.53 16482.11 00:10:36.007 00:10:36.007 20:03:43 -- nvme/nvme.sh@65 -- # wait 64472 00:10:36.007 20:03:43 -- nvme/nvme.sh@66 -- # wait 64473 00:10:36.007 00:10:36.007 real 0m10.929s 00:10:36.007 user 0m18.742s 00:10:36.007 sys 0m0.606s 00:10:36.007 ************************************ 00:10:36.007 END TEST nvme_multi_secondary 00:10:36.007 ************************************ 00:10:36.007 20:03:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:36.007 20:03:43 -- common/autotest_common.sh@10 -- # set +x 00:10:36.007 20:03:43 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:36.007 20:03:43 -- nvme/nvme.sh@102 -- # kill_stub 00:10:36.007 20:03:43 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63418 ]] 00:10:36.007 20:03:43 -- common/autotest_common.sh@1076 -- # kill 63418 00:10:36.007 20:03:43 -- common/autotest_common.sh@1077 -- # wait 63418 00:10:36.268 [2024-12-16 20:03:43.846478] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.846572] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.846585] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.846597] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.868464] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.868535] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.868547] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:36.268 [2024-12-16 20:03:43.868559] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:37.661 [2024-12-16 20:03:44.870091] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:37.661 [2024-12-16 20:03:44.870398] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. 
Dropping the request. 00:10:37.661 [2024-12-16 20:03:44.870421] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:37.661 [2024-12-16 20:03:44.870435] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:39.047 [2024-12-16 20:03:46.377777] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:39.047 [2024-12-16 20:03:46.378051] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:39.047 [2024-12-16 20:03:46.378071] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:39.047 [2024-12-16 20:03:46.378087] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64350) is not found. Dropping the request. 00:10:39.047 20:03:46 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:39.047 20:03:46 -- common/autotest_common.sh@1083 -- # echo 2 00:10:39.047 20:03:46 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:39.047 20:03:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:39.047 20:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.047 20:03:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 ************************************ 00:10:39.047 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:39.047 ************************************ 00:10:39.047 20:03:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:39.047 * Looking for test storage... 00:10:39.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:39.047 20:03:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:39.047 20:03:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:39.047 20:03:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:39.308 20:03:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:39.308 20:03:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:39.308 20:03:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:39.308 20:03:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:39.308 20:03:46 -- scripts/common.sh@335 -- # IFS=.-: 00:10:39.308 20:03:46 -- scripts/common.sh@335 -- # read -ra ver1 00:10:39.308 20:03:46 -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.308 20:03:46 -- scripts/common.sh@336 -- # read -ra ver2 00:10:39.308 20:03:46 -- scripts/common.sh@337 -- # local 'op=<' 00:10:39.308 20:03:46 -- scripts/common.sh@339 -- # ver1_l=2 00:10:39.308 20:03:46 -- scripts/common.sh@340 -- # ver2_l=1 00:10:39.308 20:03:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:39.308 20:03:46 -- scripts/common.sh@343 -- # case "$op" in 00:10:39.308 20:03:46 -- scripts/common.sh@344 -- # : 1 00:10:39.308 20:03:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:39.308 20:03:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.308 20:03:46 -- scripts/common.sh@364 -- # decimal 1 00:10:39.308 20:03:46 -- scripts/common.sh@352 -- # local d=1 00:10:39.308 20:03:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.308 20:03:46 -- scripts/common.sh@354 -- # echo 1 00:10:39.308 20:03:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:39.308 20:03:46 -- scripts/common.sh@365 -- # decimal 2 00:10:39.308 20:03:46 -- scripts/common.sh@352 -- # local d=2 00:10:39.308 20:03:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.308 20:03:46 -- scripts/common.sh@354 -- # echo 2 00:10:39.308 20:03:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:39.308 20:03:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:39.308 20:03:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:39.308 20:03:46 -- scripts/common.sh@367 -- # return 0 00:10:39.308 20:03:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.308 20:03:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.308 --rc genhtml_branch_coverage=1 00:10:39.308 --rc genhtml_function_coverage=1 00:10:39.308 --rc genhtml_legend=1 00:10:39.308 --rc geninfo_all_blocks=1 00:10:39.308 --rc geninfo_unexecuted_blocks=1 00:10:39.308 00:10:39.308 ' 00:10:39.308 20:03:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.308 --rc genhtml_branch_coverage=1 00:10:39.308 --rc genhtml_function_coverage=1 00:10:39.308 --rc genhtml_legend=1 00:10:39.308 --rc geninfo_all_blocks=1 00:10:39.308 --rc geninfo_unexecuted_blocks=1 00:10:39.308 00:10:39.308 ' 00:10:39.308 20:03:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.308 --rc genhtml_branch_coverage=1 00:10:39.308 --rc genhtml_function_coverage=1 00:10:39.308 --rc genhtml_legend=1 00:10:39.308 --rc geninfo_all_blocks=1 00:10:39.308 --rc geninfo_unexecuted_blocks=1 00:10:39.308 00:10:39.308 ' 00:10:39.308 20:03:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.308 --rc genhtml_branch_coverage=1 00:10:39.308 --rc genhtml_function_coverage=1 00:10:39.308 --rc genhtml_legend=1 00:10:39.308 --rc geninfo_all_blocks=1 00:10:39.308 --rc geninfo_unexecuted_blocks=1 00:10:39.308 00:10:39.308 ' 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:39.308 20:03:46 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:39.308 20:03:46 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:39.308 20:03:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:39.308 20:03:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:39.308 20:03:46 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:39.308 20:03:46 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:39.308 20:03:46 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:39.308 20:03:46 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:39.308 20:03:46 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:39.308 20:03:46 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:39.308 20:03:46 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:39.308 20:03:46 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:39.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64674 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64674 00:10:39.308 20:03:46 -- common/autotest_common.sh@829 -- # '[' -z 64674 ']' 00:10:39.308 20:03:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.308 20:03:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.308 20:03:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.308 20:03:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:39.309 20:03:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.309 20:03:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.309 [2024-12-16 20:03:46.900259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:39.309 [2024-12-16 20:03:46.900635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64674 ] 00:10:39.570 [2024-12-16 20:03:47.064827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.831 [2024-12-16 20:03:47.287760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:39.831 [2024-12-16 20:03:47.288224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.831 [2024-12-16 20:03:47.288498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.831 [2024-12-16 20:03:47.288969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.831 [2024-12-16 20:03:47.288988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.773 20:03:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.773 20:03:48 -- common/autotest_common.sh@862 -- # return 0 00:10:40.773 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:40.773 20:03:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.773 20:03:48 -- common/autotest_common.sh@10 -- # set +x 00:10:41.033 nvme0n1 00:10:41.033 20:03:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_7XIuh.txt 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:41.033 20:03:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.033 20:03:48 -- common/autotest_common.sh@10 -- # set +x 00:10:41.033 true 00:10:41.033 20:03:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734379428 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64704 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:41.033 20:03:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:42.943 20:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.943 20:03:50 -- common/autotest_common.sh@10 -- # set +x 00:10:42.943 [2024-12-16 20:03:50.489634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:42.943 [2024-12-16 20:03:50.489844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:42.943 [2024-12-16 20:03:50.489863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.943 [2024-12-16 20:03:50.489873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.943 [2024-12-16 20:03:50.491315] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:42.943 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64704 00:10:42.943 20:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64704 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64704 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:42.943 20:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.943 20:03:50 -- common/autotest_common.sh@10 -- # set +x 00:10:42.943 20:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_7XIuh.txt 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_7XIuh.txt 00:10:42.943 20:03:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64674 00:10:42.943 20:03:50 -- common/autotest_common.sh@936 -- # '[' -z 64674 ']' 00:10:42.943 20:03:50 -- common/autotest_common.sh@940 -- # kill -0 64674 00:10:42.943 20:03:50 -- common/autotest_common.sh@941 -- # uname 00:10:42.943 20:03:50 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:42.943 20:03:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64674 00:10:43.201 killing process with pid 64674 00:10:43.201 20:03:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:43.201 20:03:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:43.201 20:03:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64674' 00:10:43.201 20:03:50 -- common/autotest_common.sh@955 -- # kill 64674 00:10:43.201 20:03:50 -- common/autotest_common.sh@960 -- # wait 64674 00:10:44.575 20:03:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:44.575 20:03:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:44.575 00:10:44.575 real 0m5.177s 00:10:44.575 user 0m18.100s 00:10:44.575 sys 0m0.541s 00:10:44.576 ************************************ 00:10:44.576 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:44.576 ************************************ 00:10:44.576 20:03:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.576 20:03:51 -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 20:03:51 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:44.576 20:03:51 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:44.576 20:03:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:44.576 20:03:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.576 20:03:51 -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 ************************************ 00:10:44.576 START TEST nvme_fio 00:10:44.576 ************************************ 00:10:44.576 20:03:51 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:44.576 20:03:51 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:44.576 20:03:51 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:44.576 20:03:51 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:44.576 20:03:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:44.576 20:03:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:44.576 20:03:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:44.576 20:03:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:44.576 20:03:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:44.576 20:03:51 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:44.576 20:03:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:44.576 20:03:51 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:44.576 20:03:51 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:44.576 20:03:51 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:44.576 20:03:51 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:44.576 20:03:51 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:44.576 20:03:52 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:44.576 20:03:52 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:44.836 20:03:52 -- nvme/nvme.sh@41 -- # bs=4096 00:10:44.836 20:03:52 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:44.836 20:03:52 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:44.836 20:03:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:44.836 20:03:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:44.836 20:03:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:44.836 20:03:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:44.836 20:03:52 -- common/autotest_common.sh@1330 -- # shift 00:10:44.836 20:03:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:44.836 20:03:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:44.836 20:03:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:44.836 20:03:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:44.836 20:03:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:44.836 20:03:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:44.836 20:03:52 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:44.836 20:03:52 -- common/autotest_common.sh@1336 -- # break 00:10:44.836 20:03:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:44.836 20:03:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:45.098 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:45.098 fio-3.35 00:10:45.098 Starting 1 thread 00:10:50.387 00:10:50.387 test: (groupid=0, jobs=1): err= 0: pid=64838: Mon Dec 16 20:03:57 2024 00:10:50.387 read: IOPS=19.2k, BW=74.9MiB/s (78.5MB/s)(150MiB/2001msec) 00:10:50.387 slat (nsec): min=3306, max=95302, avg=5410.49, stdev=2851.80 00:10:50.387 clat (usec): min=1181, max=9207, avg=3308.83, stdev=1244.24 00:10:50.387 lat (usec): min=1186, max=9268, avg=3314.24, stdev=1245.57 00:10:50.387 clat percentiles (usec): 00:10:50.387 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2442], 00:10:50.387 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2966], 00:10:50.387 | 70.00th=[ 3359], 80.00th=[ 4178], 90.00th=[ 5342], 95.00th=[ 6128], 00:10:50.387 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 8455], 99.95th=[ 8717], 00:10:50.387 | 99.99th=[ 9110] 00:10:50.387 bw ( KiB/s): min=73560, max=80512, per=100.00%, avg=77402.67, stdev=3533.54, samples=3 00:10:50.387 iops : min=18390, max=20128, avg=19350.67, stdev=883.39, samples=3 00:10:50.387 write: IOPS=19.2k, BW=74.8MiB/s (78.5MB/s)(150MiB/2001msec); 0 zone resets 00:10:50.387 slat (nsec): min=3383, max=89223, avg=5586.33, stdev=2764.84 00:10:50.387 clat (usec): min=675, max=9140, avg=3345.85, stdev=1256.03 00:10:50.387 lat (usec): min=694, max=9154, avg=3351.44, stdev=1257.33 00:10:50.387 clat percentiles (usec): 00:10:50.387 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2474], 00:10:50.387 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 3032], 00:10:50.387 | 70.00th=[ 3392], 80.00th=[ 4228], 90.00th=[ 5407], 95.00th=[ 6194], 00:10:50.387 | 99.00th=[ 7242], 
99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8717], 00:10:50.387 | 99.99th=[ 8979] 00:10:50.387 bw ( KiB/s): min=73512, max=81008, per=100.00%, avg=77496.00, stdev=3770.22, samples=3 00:10:50.387 iops : min=18378, max=20252, avg=19374.00, stdev=942.56, samples=3 00:10:50.387 lat (usec) : 750=0.01% 00:10:50.387 lat (msec) : 2=1.40%, 4=76.76%, 10=21.84% 00:10:50.387 cpu : usr=99.10%, sys=0.10%, ctx=2, majf=0, minf=609 00:10:50.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:50.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.387 issued rwts: total=38363,38327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.388 00:10:50.388 Run status group 0 (all jobs): 00:10:50.388 READ: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:10:50.388 WRITE: bw=74.8MiB/s (78.5MB/s), 74.8MiB/s-74.8MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:10:50.388 ----------------------------------------------------- 00:10:50.388 Suppressions used: 00:10:50.388 count bytes template 00:10:50.388 1 32 /usr/src/fio/parse.c 00:10:50.388 1 8 libtcmalloc_minimal.so 00:10:50.388 ----------------------------------------------------- 00:10:50.388 00:10:50.388 20:03:57 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:50.388 20:03:57 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:50.388 20:03:57 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:50.388 20:03:57 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:50.388 20:03:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:50.388 20:03:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:50.388 20:03:58 -- nvme/nvme.sh@41 -- # bs=4096 00:10:50.388 20:03:58 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:50.388 20:03:58 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:50.388 20:03:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:50.388 20:03:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:50.388 20:03:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:50.388 20:03:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:50.388 20:03:58 -- common/autotest_common.sh@1330 -- # shift 00:10:50.388 20:03:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:50.388 20:03:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:50.388 20:03:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:50.388 20:03:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:50.388 20:03:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:50.648 20:03:58 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:50.648 20:03:58 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:50.648 
20:03:58 -- common/autotest_common.sh@1336 -- # break 00:10:50.648 20:03:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:50.648 20:03:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:50.648 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:50.648 fio-3.35 00:10:50.648 Starting 1 thread 00:10:55.936 00:10:55.936 test: (groupid=0, jobs=1): err= 0: pid=64917: Mon Dec 16 20:04:03 2024 00:10:55.936 read: IOPS=18.8k, BW=73.5MiB/s (77.1MB/s)(147MiB/2001msec) 00:10:55.936 slat (nsec): min=3879, max=71070, avg=5576.48, stdev=2545.86 00:10:55.936 clat (usec): min=289, max=10124, avg=3376.45, stdev=1191.07 00:10:55.936 lat (usec): min=294, max=10145, avg=3382.03, stdev=1192.30 00:10:55.936 clat percentiles (usec): 00:10:55.936 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2540], 00:10:55.936 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2999], 60.00th=[ 3163], 00:10:55.936 | 70.00th=[ 3458], 80.00th=[ 4047], 90.00th=[ 5145], 95.00th=[ 5997], 00:10:55.936 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 9241], 99.95th=[ 9503], 00:10:55.936 | 99.99th=[ 9896] 00:10:55.936 bw ( KiB/s): min=67936, max=86232, per=100.00%, avg=75322.67, stdev=9643.28, samples=3 00:10:55.936 iops : min=16984, max=21558, avg=18830.67, stdev=2410.82, samples=3 00:10:55.936 write: IOPS=18.8k, BW=73.5MiB/s (77.1MB/s)(147MiB/2001msec); 0 zone resets 00:10:55.936 slat (usec): min=4, max=102, avg= 5.81, stdev= 2.83 00:10:55.936 clat (usec): min=305, max=10177, avg=3400.86, stdev=1192.70 00:10:55.936 lat (usec): min=310, max=10190, avg=3406.67, stdev=1193.95 00:10:55.936 clat percentiles (usec): 00:10:55.936 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:10:55.936 | 30.00th=[ 2704], 40.00th=[ 2868], 50.00th=[ 3032], 60.00th=[ 3195], 00:10:55.936 | 70.00th=[ 3490], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 5997], 00:10:55.936 | 99.00th=[ 7504], 99.50th=[ 8029], 99.90th=[ 9372], 99.95th=[ 9634], 00:10:55.936 | 99.99th=[ 9896] 00:10:55.936 bw ( KiB/s): min=67992, max=86464, per=100.00%, avg=75368.00, stdev=9781.74, samples=3 00:10:55.936 iops : min=16998, max=21616, avg=18842.00, stdev=2445.44, samples=3 00:10:55.936 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.01% 00:10:55.936 lat (msec) : 2=1.21%, 4=77.99%, 10=20.74%, 20=0.01% 00:10:55.936 cpu : usr=98.95%, sys=0.05%, ctx=3, majf=0, minf=609 00:10:55.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:55.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.936 issued rwts: total=37651,37662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.936 00:10:55.936 Run status group 0 (all jobs): 00:10:55.936 READ: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=147MiB (154MB), run=2001-2001msec 00:10:55.936 WRITE: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=147MiB (154MB), run=2001-2001msec 00:10:56.198 ----------------------------------------------------- 00:10:56.198 Suppressions used: 00:10:56.198 count bytes template 00:10:56.198 1 32 /usr/src/fio/parse.c 00:10:56.198 1 8 libtcmalloc_minimal.so 00:10:56.198 
----------------------------------------------------- 00:10:56.198 00:10:56.198 20:04:03 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:56.198 20:04:03 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:56.198 20:04:03 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:56.198 20:04:03 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:56.459 20:04:03 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:56.459 20:04:03 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:56.721 20:04:04 -- nvme/nvme.sh@41 -- # bs=4096 00:10:56.721 20:04:04 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:56.721 20:04:04 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:56.721 20:04:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:56.721 20:04:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:56.721 20:04:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:56.721 20:04:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:56.721 20:04:04 -- common/autotest_common.sh@1330 -- # shift 00:10:56.721 20:04:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:56.721 20:04:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:56.721 20:04:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:56.721 20:04:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:56.721 20:04:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:56.721 20:04:04 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:56.721 20:04:04 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:56.721 20:04:04 -- common/autotest_common.sh@1336 -- # break 00:10:56.721 20:04:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:56.721 20:04:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:56.721 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:56.721 fio-3.35 00:10:56.721 Starting 1 thread 00:11:02.015 00:11:02.015 test: (groupid=0, jobs=1): err= 0: pid=64983: Mon Dec 16 20:04:09 2024 00:11:02.015 read: IOPS=14.9k, BW=58.1MiB/s (61.0MB/s)(116MiB/2001msec) 00:11:02.015 slat (nsec): min=4321, max=81297, avg=6539.25, stdev=3982.81 00:11:02.015 clat (usec): min=326, max=12108, avg=4269.46, stdev=1477.61 00:11:02.015 lat (usec): min=332, max=12114, avg=4276.00, stdev=1479.01 00:11:02.015 clat percentiles (usec): 00:11:02.015 | 1.00th=[ 2376], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2999], 00:11:02.015 | 30.00th=[ 3163], 40.00th=[ 3392], 50.00th=[ 3752], 60.00th=[ 4293], 00:11:02.015 | 70.00th=[ 4948], 80.00th=[ 5604], 90.00th=[ 6521], 95.00th=[ 7111], 00:11:02.015 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[10159], 99.95th=[10683], 00:11:02.015 | 99.99th=[11600] 
00:11:02.015 bw ( KiB/s): min=50248, max=73008, per=99.38%, avg=59154.67, stdev=12159.63, samples=3 00:11:02.015 iops : min=12562, max=18252, avg=14788.67, stdev=3039.91, samples=3 00:11:02.015 write: IOPS=14.9k, BW=58.2MiB/s (61.0MB/s)(116MiB/2001msec); 0 zone resets 00:11:02.015 slat (usec): min=4, max=114, avg= 6.73, stdev= 3.94 00:11:02.015 clat (usec): min=337, max=11622, avg=4297.04, stdev=1464.25 00:11:02.015 lat (usec): min=342, max=11628, avg=4303.77, stdev=1465.56 00:11:02.015 clat percentiles (usec): 00:11:02.015 | 1.00th=[ 2409], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 3032], 00:11:02.015 | 30.00th=[ 3228], 40.00th=[ 3458], 50.00th=[ 3785], 60.00th=[ 4293], 00:11:02.015 | 70.00th=[ 5014], 80.00th=[ 5669], 90.00th=[ 6521], 95.00th=[ 7111], 00:11:02.015 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9896], 99.95th=[10552], 00:11:02.015 | 99.99th=[11469] 00:11:02.015 bw ( KiB/s): min=49824, max=73104, per=99.06%, avg=58986.67, stdev=12405.69, samples=3 00:11:02.015 iops : min=12456, max=18276, avg=14746.67, stdev=3101.42, samples=3 00:11:02.015 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:11:02.015 lat (msec) : 2=0.19%, 4=54.66%, 10=44.98%, 20=0.11% 00:11:02.015 cpu : usr=98.40%, sys=0.10%, ctx=4, majf=0, minf=608 00:11:02.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:02.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.015 issued rwts: total=29777,29789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.015 00:11:02.015 Run status group 0 (all jobs): 00:11:02.015 READ: bw=58.1MiB/s (61.0MB/s), 58.1MiB/s-58.1MiB/s (61.0MB/s-61.0MB/s), io=116MiB (122MB), run=2001-2001msec 00:11:02.016 WRITE: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=116MiB (122MB), run=2001-2001msec 00:11:02.016 ----------------------------------------------------- 00:11:02.016 Suppressions used: 00:11:02.016 count bytes template 00:11:02.016 1 32 /usr/src/fio/parse.c 00:11:02.016 1 8 libtcmalloc_minimal.so 00:11:02.016 ----------------------------------------------------- 00:11:02.016 00:11:02.016 20:04:09 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:02.016 20:04:09 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:02.016 20:04:09 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:02.016 20:04:09 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:02.016 20:04:09 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:02.016 20:04:09 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:02.276 20:04:09 -- nvme/nvme.sh@41 -- # bs=4096 00:11:02.276 20:04:09 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:02.276 20:04:09 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:02.276 20:04:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:02.276 20:04:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:02.276 20:04:09 -- common/autotest_common.sh@1328 -- # local sanitizers 
00:11:02.276 20:04:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:02.276 20:04:09 -- common/autotest_common.sh@1330 -- # shift 00:11:02.276 20:04:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:02.276 20:04:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:02.276 20:04:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:02.276 20:04:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:02.276 20:04:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:02.276 20:04:09 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:02.276 20:04:09 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:02.276 20:04:09 -- common/autotest_common.sh@1336 -- # break 00:11:02.276 20:04:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:02.276 20:04:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:02.537 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:02.537 fio-3.35 00:11:02.537 Starting 1 thread 00:11:09.123 00:11:09.123 test: (groupid=0, jobs=1): err= 0: pid=65071: Mon Dec 16 20:04:15 2024 00:11:09.123 read: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(100MiB/2001msec) 00:11:09.123 slat (nsec): min=6078, max=76875, avg=8508.22, stdev=4600.67 00:11:09.123 clat (usec): min=311, max=12361, avg=4954.62, stdev=1470.76 00:11:09.123 lat (usec): min=317, max=12438, avg=4963.13, stdev=1472.05 00:11:09.123 clat percentiles (usec): 00:11:09.123 | 1.00th=[ 2900], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3490], 00:11:09.123 | 30.00th=[ 3687], 40.00th=[ 4047], 50.00th=[ 4883], 60.00th=[ 5473], 00:11:09.123 | 70.00th=[ 5932], 80.00th=[ 6390], 90.00th=[ 6915], 95.00th=[ 7439], 00:11:09.123 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[10814], 00:11:09.123 | 99.99th=[11863] 00:11:09.123 bw ( KiB/s): min=49816, max=53104, per=100.00%, avg=51909.33, stdev=1818.91, samples=3 00:11:09.123 iops : min=12454, max=13276, avg=12977.33, stdev=454.73, samples=3 00:11:09.123 write: IOPS=12.8k, BW=50.0MiB/s (52.4MB/s)(100MiB/2001msec); 0 zone resets 00:11:09.123 slat (usec): min=6, max=162, avg= 9.01, stdev= 4.80 00:11:09.123 clat (usec): min=486, max=11813, avg=4995.00, stdev=1480.20 00:11:09.123 lat (usec): min=493, max=11830, avg=5004.01, stdev=1481.51 00:11:09.123 clat percentiles (usec): 00:11:09.123 | 1.00th=[ 2900], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3523], 00:11:09.123 | 30.00th=[ 3720], 40.00th=[ 4080], 50.00th=[ 4883], 60.00th=[ 5538], 00:11:09.123 | 70.00th=[ 5932], 80.00th=[ 6456], 90.00th=[ 6980], 95.00th=[ 7439], 00:11:09.123 | 99.00th=[ 8291], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[10683], 00:11:09.123 | 99.99th=[11731] 00:11:09.123 bw ( KiB/s): min=49768, max=53408, per=100.00%, avg=51933.33, stdev=1915.77, samples=3 00:11:09.123 iops : min=12442, max=13352, avg=12983.33, stdev=478.94, samples=3 00:11:09.123 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.02% 00:11:09.123 lat (msec) : 2=0.06%, 4=38.63%, 10=61.18%, 20=0.06% 00:11:09.123 cpu : usr=98.15%, sys=0.20%, ctx=3, majf=0, minf=606 00:11:09.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:09.123 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.123 issued rwts: total=25669,25600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.123 00:11:09.123 Run status group 0 (all jobs): 00:11:09.123 READ: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=100MiB (105MB), run=2001-2001msec 00:11:09.123 WRITE: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=100MiB (105MB), run=2001-2001msec 00:11:09.123 ----------------------------------------------------- 00:11:09.123 Suppressions used: 00:11:09.123 count bytes template 00:11:09.123 1 32 /usr/src/fio/parse.c 00:11:09.123 1 8 libtcmalloc_minimal.so 00:11:09.123 ----------------------------------------------------- 00:11:09.123 00:11:09.123 20:04:15 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:09.123 20:04:15 -- nvme/nvme.sh@46 -- # true 00:11:09.123 00:11:09.123 real 0m24.144s 00:11:09.123 user 0m15.779s 00:11:09.123 sys 0m14.405s 00:11:09.123 20:04:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:09.123 ************************************ 00:11:09.123 END TEST nvme_fio 00:11:09.123 ************************************ 00:11:09.123 20:04:15 -- common/autotest_common.sh@10 -- # set +x 00:11:09.123 00:11:09.123 real 1m38.642s 00:11:09.123 user 3m41.126s 00:11:09.123 sys 0m24.784s 00:11:09.123 20:04:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:09.123 ************************************ 00:11:09.123 END TEST nvme 00:11:09.123 ************************************ 00:11:09.123 20:04:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.123 20:04:16 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:09.123 20:04:16 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:09.123 20:04:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:09.123 20:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.123 20:04:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.123 ************************************ 00:11:09.123 START TEST nvme_scc 00:11:09.123 ************************************ 00:11:09.123 20:04:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:09.123 * Looking for test storage... 
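Before the nvme_scc run below gets going, it is worth noting the pattern the three nvme_fio passes above repeat for every controller: autotest_common.sh checks whether the SPDK fio plugin was linked against a sanitizer runtime and, if it was, preloads that runtime ahead of the plugin before launching fio. A minimal sketch of that pattern, reusing the paths and the PCIe address from this particular run (assumptions taken from the log, not a fixed interface), might look like:

# Sketch of the LD_PRELOAD handling visible in the fio_plugin trace above.
# All paths below are the ones this run happened to use.
fio_bin=/usr/src/fio/fio
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# If the plugin was built with ASan, the sanitizer runtime must be loaded
# before the instrumented plugin, so it is put first in LD_PRELOAD.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [[ -n "$asan_lib" ]]; then
    preload="$asan_lib $plugin"
else
    preload="$plugin"
fi

# fio addresses an SPDK-managed controller through a
# 'trtype=PCIe traddr=...' filename string.
LD_PRELOAD="$preload" "$fio_bin" "$job" \
    '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096

The colon-to-dot rewrite in the traddr is deliberate: fio treats ':' inside --filename as a separator, so the BDF 0000:00:07.0 is passed to fio as 0000.00.07.0.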
00:11:09.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:09.123 20:04:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:09.123 20:04:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:09.123 20:04:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:09.123 20:04:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:09.123 20:04:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:09.123 20:04:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:09.123 20:04:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:09.123 20:04:16 -- scripts/common.sh@335 -- # IFS=.-: 00:11:09.123 20:04:16 -- scripts/common.sh@335 -- # read -ra ver1 00:11:09.123 20:04:16 -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.123 20:04:16 -- scripts/common.sh@336 -- # read -ra ver2 00:11:09.123 20:04:16 -- scripts/common.sh@337 -- # local 'op=<' 00:11:09.123 20:04:16 -- scripts/common.sh@339 -- # ver1_l=2 00:11:09.123 20:04:16 -- scripts/common.sh@340 -- # ver2_l=1 00:11:09.123 20:04:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:09.123 20:04:16 -- scripts/common.sh@343 -- # case "$op" in 00:11:09.123 20:04:16 -- scripts/common.sh@344 -- # : 1 00:11:09.123 20:04:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:09.124 20:04:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.124 20:04:16 -- scripts/common.sh@364 -- # decimal 1 00:11:09.124 20:04:16 -- scripts/common.sh@352 -- # local d=1 00:11:09.124 20:04:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.124 20:04:16 -- scripts/common.sh@354 -- # echo 1 00:11:09.124 20:04:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:09.124 20:04:16 -- scripts/common.sh@365 -- # decimal 2 00:11:09.124 20:04:16 -- scripts/common.sh@352 -- # local d=2 00:11:09.124 20:04:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.124 20:04:16 -- scripts/common.sh@354 -- # echo 2 00:11:09.124 20:04:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:09.124 20:04:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:09.124 20:04:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:09.124 20:04:16 -- scripts/common.sh@367 -- # return 0 00:11:09.124 20:04:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.124 20:04:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.124 --rc genhtml_branch_coverage=1 00:11:09.124 --rc genhtml_function_coverage=1 00:11:09.124 --rc genhtml_legend=1 00:11:09.124 --rc geninfo_all_blocks=1 00:11:09.124 --rc geninfo_unexecuted_blocks=1 00:11:09.124 00:11:09.124 ' 00:11:09.124 20:04:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.124 --rc genhtml_branch_coverage=1 00:11:09.124 --rc genhtml_function_coverage=1 00:11:09.124 --rc genhtml_legend=1 00:11:09.124 --rc geninfo_all_blocks=1 00:11:09.124 --rc geninfo_unexecuted_blocks=1 00:11:09.124 00:11:09.124 ' 00:11:09.124 20:04:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.124 --rc genhtml_branch_coverage=1 00:11:09.124 --rc genhtml_function_coverage=1 00:11:09.124 --rc genhtml_legend=1 00:11:09.124 --rc geninfo_all_blocks=1 00:11:09.124 --rc geninfo_unexecuted_blocks=1 00:11:09.124 00:11:09.124 ' 00:11:09.124 20:04:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.124 --rc genhtml_branch_coverage=1 00:11:09.124 --rc genhtml_function_coverage=1 00:11:09.124 --rc genhtml_legend=1 00:11:09.124 --rc geninfo_all_blocks=1 00:11:09.124 --rc geninfo_unexecuted_blocks=1 00:11:09.124 00:11:09.124 ' 00:11:09.124 20:04:16 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:09.124 20:04:16 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:09.124 20:04:16 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:09.124 20:04:16 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:09.124 20:04:16 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.124 20:04:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.124 20:04:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.124 20:04:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.124 20:04:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.124 20:04:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.124 20:04:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.124 20:04:16 -- paths/export.sh@5 -- # export PATH 00:11:09.124 20:04:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.124 20:04:16 -- nvme/functions.sh@10 -- # ctrls=() 00:11:09.124 20:04:16 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:09.124 20:04:16 -- nvme/functions.sh@11 -- # nvmes=() 00:11:09.124 20:04:16 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:09.124 20:04:16 -- nvme/functions.sh@12 -- # bdfs=() 00:11:09.124 20:04:16 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:09.124 20:04:16 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:09.124 20:04:16 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:09.124 
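A few lines up, the trace walks through scripts/common.sh's lcov probe: lt 1.15 2 calls cmp_versions, which splits each version string on '.', '-' and ':' and compares the pieces one by one. A standalone sketch of that comparison, simplified to purely numeric components (the helper name is mine, not SPDK's), could be:

# Sketch of the dotted-version comparison traced above (lt / cmp_versions),
# limited to numeric components.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing components count as 0, so "1.15" vs "2" decides at the first field.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2.x"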
20:04:16 -- nvme/functions.sh@14 -- # nvme_name= 00:11:09.124 20:04:16 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.124 20:04:16 -- nvme/nvme_scc.sh@12 -- # uname 00:11:09.124 20:04:16 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:09.124 20:04:16 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:09.124 20:04:16 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:09.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.384 Waiting for block devices as requested 00:11:09.384 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.384 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.644 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.644 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.942 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:14.942 20:04:22 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:14.942 20:04:22 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:14.942 20:04:22 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.942 20:04:22 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:14.942 20:04:22 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:14.942 20:04:22 -- scripts/common.sh@15 -- # local i 00:11:14.942 20:04:22 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:14.942 20:04:22 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.942 20:04:22 -- scripts/common.sh@24 -- # return 0 00:11:14.942 20:04:22 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:14.942 20:04:22 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:14.942 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.942 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 
20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.942 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.942 20:04:22 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.942 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.943 20:04:22 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.943 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.943 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.943 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:14.944 20:04:22 
-- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:14.944 
20:04:22 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.944 20:04:22 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:14.944 20:04:22 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.944 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 
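The long run of eval lines above is functions.sh's nvme_get folding every field reported by 'nvme id-ctrl /dev/nvme0' into a bash associative array (nvme0[vid], nvme0[sn], nvme0[mdts], ...). A rough standalone equivalent, assuming nvme-cli's default human-readable "field : value" output and an illustrative device path, is:

# Sketch of the id-ctrl parsing pattern behind the eval trace above.
# /dev/nvme0 and the variable names here are illustrative only.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}            # drop padding around the field name
    val=${val#"${val%%[![:space:]]*}"}  # trim leading spaces from the value
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)

echo "model: ${ctrl[mn]}, mdts: ${ctrl[mdts]}, oncs: ${ctrl[oncs]}"

In the traced helper the array is created with 'local -gA', i.e. globally, so later stages of the test can look fields up by controller name; the sketch keeps a plain local array for clarity.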
00:11:14.944 20:04:22 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:14.944 20:04:22 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:14.945 20:04:22 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:14.945 20:04:22 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:14.945 20:04:22 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:14.945 20:04:22 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.945 20:04:22 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:14.945 20:04:22 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:14.945 20:04:22 -- scripts/common.sh@15 -- # local i 00:11:14.945 20:04:22 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:14.945 20:04:22 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.945 20:04:22 -- scripts/common.sh@24 -- # return 0 00:11:14.945 20:04:22 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:14.945 20:04:22 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:14.945 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.945 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.945 20:04:22 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:14.945 20:04:22 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.945 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.945 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.945 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 
00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:14.946 20:04:22 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.946 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.946 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:14.947 20:04:22 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.947 20:04:22 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:14.947 20:04:22 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:14.947 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.947 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:14.947 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.947 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.947 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.948 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.948 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.948 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:14.949 20:04:22 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.949 20:04:22 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:14.949 20:04:22 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:14.949 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.949 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.949 20:04:22 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:14.949 20:04:22 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.949 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:14.949 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.949 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
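The xtrace above is the per-field readout that the test's nvme/functions.sh performs for the controller (nvme1, via id-ctrl) and each of its namespaces (nvme1n1, nvme1n2, ..., via id-ns): nvme-cli output is read line by line, split on ':' into a register name and value, and stored in a global associative array named after the device. Loosely, and glossing over the whitespace normalization the real helper does, the shape is as follows (names follow the trace at functions.sh@16-@23 and @53-@58; this is a sketch, not the verbatim script):

nvme_get() {                       # sketch of the loop traced at functions.sh@16-@23
    local ref=$1 reg val
    shift
    local -gA "$ref=()"            # e.g. declare a global array: nvme1n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                  # keep only "reg : val" lines
        reg=${reg//[[:space:]]/}                   # assumption: squeeze the key
        eval "${ref}[$reg]=\"${val# }\""           # e.g. nvme1n1[nsze]="0x100000"
    done < <(nvme "$@")            # the job invokes /usr/local/src/nvme-cli/nvme
}

# Namespace enumeration as traced at functions.sh@53-@58: every nvme1n* node under
# the controller gets its own id-ns array, indexed by namespace number.
declare -A nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/${ctrl##*/}n"*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                               # nvme1n1, nvme1n2, nvme1n3
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns##*n}]=$ns_dev                    # e.g. nvme1_ns[2]=nvme1n2
done

Downstream checks can then consult fields like nvme1n1[flbas]=0x4 (lbaf4 with lbads:12, i.e. a 4096-byte block size, is the format in use) without re-running nvme-cli.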
00:11:14.950 20:04:22 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.950 20:04:22 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:14.950 20:04:22 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:14.950 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.950 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.950 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:14.950 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.950 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.951 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.951 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.951 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:14.952 20:04:22 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:14.952 20:04:22 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:14.952 20:04:22 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:14.952 20:04:22 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:14.952 20:04:22 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.952 20:04:22 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:14.952 20:04:22 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:14.952 20:04:22 -- scripts/common.sh@15 -- # local i 00:11:14.952 20:04:22 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:14.952 20:04:22 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.952 20:04:22 -- scripts/common.sh@24 -- # return 0 00:11:14.952 20:04:22 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:14.952 20:04:22 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:14.952 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.952 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.952 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.952 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:14.952 20:04:22 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 
20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 
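Editor's note: zooming out, the `for ctrl in /sys/class/nvme/nvme*` loop, the pci_can_use check, and the ctrls/nvmes/bdfs/ordered_ctrls assignments seen earlier in this trace outline the enumeration flow. A hedged reconstruction of that flow, reusing the nvme_get_sketch helper above (names, the PCI lookup, and the skipped allow/deny filtering are assumptions, not the real script):

#!/usr/bin/env bash
# Sketch of the enumeration flow shown by the trace: walk every controller in
# sysfs, identify it, identify each of its namespaces, and record which PCI
# address (bdf) backs it.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:06.0
    ctrl_dev=${ctrl##*/}                                # e.g. nvme2
    nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                                # e.g. nvme2n1
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
    done
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                     # per-controller namespace map name
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done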
00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 
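Editor's note: two id-ctrl fields captured above decode as follows: mdts is a power of two in units of the controller's minimum memory page size (assumed 4 KiB for this QEMU controller), and the temperature thresholds are reported in kelvin. A quick check using the values from this trace:

# Values taken from the nvme2 id-ctrl output above; the 4 KiB page size is an
# assumption about CAP.MPSMIN for the QEMU NVMe controller.
mdts=7; page_kib=4
echo "max transfer size: $(( page_kib * (1 << mdts) )) KiB"               # 512 KiB
wctemp=343; cctemp=373
echo "warn/crit temp: $(( wctemp - 273 ))C / $(( cctemp - 273 ))C"        # ~70C / ~100C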
00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:14.953 20:04:22 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.953 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.953 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.953 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:14.954 20:04:22 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.954 20:04:22 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:14.954 20:04:22 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:14.954 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.954 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.954 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.954 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.954 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 
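Editor's note: the nsze/ncap/nuse values captured above for nvme2n1 are logical block counts; flbas=0x7 selects LBA format 7, which this trace reports further down as lbads:12 (4096-byte blocks, marked in use). A quick size check from those values:

# Size check using the nvme2n1 fields from this trace (illustration only):
# lbads:12 means 2^12 = 4096-byte blocks, nsze is the block count.
nsze=0x17a17a
block_size=$(( 1 << 12 ))
echo $(( nsze * block_size ))    # 6343335936 bytes, about 5.9 GiB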
00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.955 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.955 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:14.955 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:14.956 20:04:22 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:14.956 20:04:22 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:14.956 20:04:22 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:14.956 20:04:22 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:14.956 20:04:22 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:14.956 20:04:22 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:14.956 20:04:22 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:14.956 20:04:22 -- scripts/common.sh@15 -- # local i 00:11:14.956 20:04:22 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:14.956 20:04:22 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:14.956 20:04:22 -- scripts/common.sh@24 -- # return 0 00:11:14.956 20:04:22 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:14.956 20:04:22 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:14.956 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.956 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.956 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.956 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.956 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:14.957 20:04:22 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.957 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.957 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:14.957 20:04:22 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:14.958 20:04:22 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:14.958 20:04:22 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:14.958 20:04:22 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:14.958 20:04:22 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@18 -- # shift 00:11:14.958 20:04:22 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:14.958 
20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.958 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.958 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:14.958 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:14.959 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.959 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.959 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:14.960 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.960 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.960 20:04:22 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:14.960 20:04:22 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # IFS=: 00:11:14.960 20:04:22 -- nvme/functions.sh@21 -- # read -r reg val 00:11:14.960 20:04:22 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:14.960 20:04:22 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:14.960 20:04:22 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:14.960 20:04:22 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:14.960 20:04:22 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:14.960 20:04:22 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:14.960 20:04:22 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:14.960 20:04:22 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:14.960 20:04:22 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:14.960 20:04:22 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:14.960 20:04:22 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:14.960 20:04:22 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:14.960 20:04:22 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:14.960 20:04:22 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:14.960 20:04:22 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:14.960 20:04:22 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # echo nvme1 00:11:14.960 20:04:22 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:14.960 20:04:22 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:14.960 
20:04:22 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:14.960 20:04:22 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:14.960 20:04:22 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # echo nvme0 00:11:14.960 20:04:22 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # echo nvme3 00:11:14.960 20:04:22 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:14.960 20:04:22 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:14.960 20:04:22 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:14.960 20:04:22 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:14.960 20:04:22 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:14.960 20:04:22 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:14.960 20:04:22 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:14.960 20:04:22 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:14.960 20:04:22 -- nvme/functions.sh@197 -- # echo nvme2 00:11:14.960 20:04:22 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:14.960 20:04:22 -- nvme/functions.sh@206 -- # echo nvme1 00:11:14.960 20:04:22 -- nvme/functions.sh@207 -- # return 0 00:11:14.960 20:04:22 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:14.960 20:04:22 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:14.960 20:04:22 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:16.165 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.165 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.165 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.165 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.165 20:04:23 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:16.165 20:04:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:16.165 20:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.165 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.165 ************************************ 00:11:16.165 START TEST nvme_simple_copy 00:11:16.165 ************************************ 00:11:16.165 20:04:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:16.427 Initializing NVMe Controllers 00:11:16.427 Attaching to 0000:00:08.0 00:11:16.427 Controller supports SCC. Attached to 0000:00:08.0 00:11:16.427 Namespace ID: 1 size: 4GB 00:11:16.427 Initialization complete. 00:11:16.427 00:11:16.427 Controller QEMU NVMe Ctrl (12342 ) 00:11:16.427 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:16.427 Namespace Block Size:4096 00:11:16.427 Writing LBAs 0 to 63 with Random Data 00:11:16.427 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:16.427 LBAs matching Written Data: 64 00:11:16.427 00:11:16.427 real 0m0.276s 00:11:16.427 user 0m0.098s 00:11:16.427 sys 0m0.076s 00:11:16.427 20:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.427 ************************************ 00:11:16.427 END TEST nvme_simple_copy 00:11:16.427 ************************************ 00:11:16.427 20:04:24 -- common/autotest_common.sh@10 -- # set +x 00:11:16.689 ************************************ 00:11:16.689 END TEST nvme_scc 00:11:16.689 ************************************ 00:11:16.689 00:11:16.689 real 0m7.999s 00:11:16.689 user 0m1.107s 00:11:16.689 sys 0m1.632s 00:11:16.689 20:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.689 20:04:24 -- common/autotest_common.sh@10 -- # set +x 00:11:16.689 20:04:24 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:16.689 20:04:24 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:16.689 20:04:24 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:16.689 20:04:24 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:16.689 20:04:24 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:16.689 20:04:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.689 20:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.689 20:04:24 -- common/autotest_common.sh@10 -- # set +x 00:11:16.689 ************************************ 00:11:16.689 START TEST nvme_fdp 00:11:16.689 ************************************ 00:11:16.689 20:04:24 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:16.689 * Looking for test storage... 
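The trace above picks the controller for the simple-copy test by reading each controller's ONCS value (0x15d on every controller in this run) and testing bit 8, the Copy-command capability bit; nvme1 at 0000:00:08.0 is returned first, and the simple_copy binary then writes LBAs 0-63 with random data, copies them to LBA 256, and reports 64 matching LBAs. A condensed sketch of that capability check, consistent with the ctrl_has_scc / get_oncs calls traced above but abbreviated rather than the literal functions.sh source:

    ctrl_has_scc() {                      # succeeds if the controller advertises Simple Copy
        local ctrl=$1
        local -n _ctrl=$ctrl              # nameref to the array built by nvme_get (e.g. nvme1)
        local oncs=${_ctrl[oncs]}         # 0x15d in this run
        (( oncs & 1 << 8 ))               # ONCS bit 8 = Copy command supported
    }

    declare -A nvme1=([oncs]=0x15d)       # minimal stand-in for the scanned controller
    ctrl_has_scc nvme1 && echo nvme1      # prints nvme1, matching the trace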
00:11:16.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:16.689 20:04:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:16.689 20:04:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:16.689 20:04:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:16.689 20:04:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:16.689 20:04:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:16.689 20:04:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:16.689 20:04:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:16.689 20:04:24 -- scripts/common.sh@335 -- # IFS=.-: 00:11:16.689 20:04:24 -- scripts/common.sh@335 -- # read -ra ver1 00:11:16.689 20:04:24 -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.689 20:04:24 -- scripts/common.sh@336 -- # read -ra ver2 00:11:16.689 20:04:24 -- scripts/common.sh@337 -- # local 'op=<' 00:11:16.689 20:04:24 -- scripts/common.sh@339 -- # ver1_l=2 00:11:16.689 20:04:24 -- scripts/common.sh@340 -- # ver2_l=1 00:11:16.689 20:04:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:16.689 20:04:24 -- scripts/common.sh@343 -- # case "$op" in 00:11:16.689 20:04:24 -- scripts/common.sh@344 -- # : 1 00:11:16.689 20:04:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:16.689 20:04:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.689 20:04:24 -- scripts/common.sh@364 -- # decimal 1 00:11:16.689 20:04:24 -- scripts/common.sh@352 -- # local d=1 00:11:16.689 20:04:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.689 20:04:24 -- scripts/common.sh@354 -- # echo 1 00:11:16.689 20:04:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:16.689 20:04:24 -- scripts/common.sh@365 -- # decimal 2 00:11:16.689 20:04:24 -- scripts/common.sh@352 -- # local d=2 00:11:16.689 20:04:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.689 20:04:24 -- scripts/common.sh@354 -- # echo 2 00:11:16.689 20:04:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:16.689 20:04:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:16.689 20:04:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:16.689 20:04:24 -- scripts/common.sh@367 -- # return 0 00:11:16.689 20:04:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.689 20:04:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:16.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.689 --rc genhtml_branch_coverage=1 00:11:16.689 --rc genhtml_function_coverage=1 00:11:16.689 --rc genhtml_legend=1 00:11:16.689 --rc geninfo_all_blocks=1 00:11:16.689 --rc geninfo_unexecuted_blocks=1 00:11:16.689 00:11:16.689 ' 00:11:16.689 20:04:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:16.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.690 --rc genhtml_branch_coverage=1 00:11:16.690 --rc genhtml_function_coverage=1 00:11:16.690 --rc genhtml_legend=1 00:11:16.690 --rc geninfo_all_blocks=1 00:11:16.690 --rc geninfo_unexecuted_blocks=1 00:11:16.690 00:11:16.690 ' 00:11:16.690 20:04:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:16.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.690 --rc genhtml_branch_coverage=1 00:11:16.690 --rc genhtml_function_coverage=1 00:11:16.690 --rc genhtml_legend=1 00:11:16.690 --rc geninfo_all_blocks=1 00:11:16.690 --rc geninfo_unexecuted_blocks=1 00:11:16.690 00:11:16.690 ' 00:11:16.690 20:04:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:16.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.690 --rc genhtml_branch_coverage=1 00:11:16.690 --rc genhtml_function_coverage=1 00:11:16.690 --rc genhtml_legend=1 00:11:16.690 --rc geninfo_all_blocks=1 00:11:16.690 --rc geninfo_unexecuted_blocks=1 00:11:16.690 00:11:16.690 ' 00:11:16.690 20:04:24 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.690 20:04:24 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.690 20:04:24 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:16.690 20:04:24 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:16.690 20:04:24 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.690 20:04:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.690 20:04:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.690 20:04:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.690 20:04:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.690 20:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.690 20:04:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.690 20:04:24 -- paths/export.sh@5 -- # export PATH 00:11:16.690 20:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.690 20:04:24 -- nvme/functions.sh@10 -- # ctrls=() 00:11:16.690 20:04:24 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:16.690 20:04:24 -- nvme/functions.sh@11 -- # nvmes=() 00:11:16.690 20:04:24 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:16.690 20:04:24 -- nvme/functions.sh@12 -- # bdfs=() 00:11:16.690 20:04:24 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:16.690 20:04:24 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:16.690 20:04:24 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:16.690 
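Sourcing functions.sh at the start of the FDP test re-declares the bookkeeping maps that the earlier controller scan populated: ctrls, nvmes, and bdfs are associative arrays keyed by controller name, and ordered_ctrls is an indexed array. A minimal sketch of the shape they take after a scan like the one above; the values are taken from this run, but the assignment lines are illustrative rather than the literal scan_nvme_ctrls code:

    declare -A ctrls nvmes bdfs       # nvme/functions.sh@10-12 in the trace
    declare -a ordered_ctrls          # nvme/functions.sh@13

    ctrls[nvme3]=nvme3                # controller registered by the scan
    nvmes[nvme3]=nvme3_ns             # name of its per-namespace array (nvme3_ns[1]=nvme3n1)
    bdfs[nvme3]=0000:00:07.0          # PCI address recorded for the controller
    ordered_ctrls[3]=nvme3            # index derived from ${ctrl_dev/nvme/}

    echo "${bdfs[nvme3]}"             # -> 0000:00:07.0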
20:04:24 -- nvme/functions.sh@14 -- # nvme_name= 00:11:16.690 20:04:24 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.690 20:04:24 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.262 Waiting for block devices as requested 00:11:17.262 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.523 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.523 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.523 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:22.820 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:22.820 20:04:30 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:22.820 20:04:30 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:22.820 20:04:30 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.820 20:04:30 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:22.820 20:04:30 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:22.820 20:04:30 -- scripts/common.sh@15 -- # local i 00:11:22.820 20:04:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:22.820 20:04:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.820 20:04:30 -- scripts/common.sh@24 -- # return 0 00:11:22.820 20:04:30 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:22.820 20:04:30 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:22.820 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.820 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:22.820 20:04:30 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.820 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.820 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.820 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 
20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:22.821 
20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:22.821 20:04:30 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.821 
20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:22.821 20:04:30 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.821 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.821 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:22.822 20:04:30 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:22.822 20:04:30 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
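The nvme_get trace above walks the text output of `nvme id-ctrl` with IFS=: and `read -r reg val`, storing each register into a per-controller associative array before the controller is recorded in ctrls/bdfs. A minimal standalone sketch of that parsing pattern, assuming nvme-cli is installed and /dev/nvme0 exists (illustrative only, not the functions.sh implementation):

  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # register names come padded, e.g. "vid       "
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=${val# }           # keep the raw value, minus one leading space
  done < <(nvme id-ctrl /dev/nvme0)
  echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"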
00:11:22.822 20:04:30 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:22.822 20:04:30 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:22.822 20:04:30 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:22.822 20:04:30 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.822 20:04:30 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:22.822 20:04:30 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:22.822 20:04:30 -- scripts/common.sh@15 -- # local i 00:11:22.822 20:04:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:22.822 20:04:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.822 20:04:30 -- scripts/common.sh@24 -- # return 0 00:11:22.822 20:04:30 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:22.822 20:04:30 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:22.822 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.822 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.822 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:22.822 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.822 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:22.823 
20:04:30 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 
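Note the ctratt values captured so far: 0x88010 for nvme0 (the nqn.2019-08.org.qemu:fdp-subsys3 subsystem) versus 0x8000 for nvme1. A hedged sketch of picking the FDP-capable controller from such values; CTRATT bit 19 is assumed here to be the Flexible Data Placement bit per NVMe 2.0, so verify against the spec before relying on it:

  declare -A ctratt=( [nvme0]=0x88010 [nvme1]=0x8000 )   # values from this run
  for ctrl in "${!ctratt[@]}"; do
      # assumed FDP bit: CTRATT bit 19 (0x80000)
      if (( ${ctratt[$ctrl]} & 0x80000 )); then
          echo "$ctrl advertises FDP"
      fi
  done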
00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.823 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:22.823 20:04:30 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.823 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 
00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
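The bookkeeping recorded earlier (ctrls, nvmes, bdfs, ordered_ctrls) ties each controller to its namespace array and PCI address: bdfs[nvme0]=0000:00:09.0 above, with nvme1 sitting at 0000:00:08.0. A trivial, self-contained illustration of consuming such a map, with values copied from this log:

  declare -A bdfs=( [nvme0]=0000:00:09.0 [nvme1]=0000:00:08.0 )
  for ctrl in "${!bdfs[@]}"; do
      printf '%s is behind PCI device %s\n' "$ctrl" "${bdfs[$ctrl]}"
  done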
00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.824 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:22.824 20:04:30 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:22.824 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:22.825 20:04:30 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:22.825 20:04:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:22.825 20:04:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:22.825 20:04:30 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:22.825 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.825 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:22.825 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.825 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.825 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 
00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:22.826 20:04:30 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.826 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.826 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:22.826 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:22.827 20:04:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:22.827 20:04:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:22.827 20:04:30 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:22.827 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.827 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 
00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:22.827 20:04:30 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 
20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:22.827 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.827 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.827 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:22.828 20:04:30 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:22.828 20:04:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:22.828 20:04:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
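(The trace above and below shows nvme_get reading `nvme id-ctrl` / `nvme id-ns` output one `reg: val` pair at a time and eval-ing each pair into a bash associative array such as nvme1, nvme1n1, nvme1n2. A minimal sketch of that pattern, assuming only what is visible in this trace and not reproducing the actual nvme/functions.sh code; the name nvme_get_sketch is made up for illustration:

    # Sketch of the IFS=: / read -r / eval pattern seen in the xtrace output above.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                    # e.g. creates the global array nvme1n2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # key without padding, e.g. "nsze"
            val=${val# }                         # drop the single space after ':'
            [[ -n $reg && -n $val ]] || continue # skip headers and blank lines
            eval "${ref}[\$reg]=\$val"           # nvme1n2[nsze]=0x100000, etc.
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")   # binary invoked in this run
    }
    # usage (hypothetical): nvme_get_sketch nvme1n2 id-ns /dev/nvme1n2; echo "${nvme1n2[nsze]}"

The real helper additionally registers each parsed device in the ctrls/nvmes/bdfs arrays, as the later entries in this trace show.)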
00:11:22.828 20:04:30 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:22.828 20:04:30 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:22.828 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.828 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:22.828 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.828 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.828 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:22.829 20:04:30 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:22.829 20:04:30 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.829 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.829 20:04:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:22.829 20:04:30 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:22.829 20:04:30 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:22.829 20:04:30 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:22.829 20:04:30 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:22.829 20:04:30 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:22.829 20:04:30 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:22.829 20:04:30 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:22.829 20:04:30 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:22.829 20:04:30 -- scripts/common.sh@15 -- # local i 00:11:22.829 20:04:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:22.829 20:04:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.829 20:04:30 -- scripts/common.sh@24 -- # return 0 00:11:22.829 20:04:30 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:22.829 20:04:30 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:22.829 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.830 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 
20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:22.830 20:04:30 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.830 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.830 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:22.830 20:04:30 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:22.831 20:04:30 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.831 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.831 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.831 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 
00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:22.832 20:04:30 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:22.832 20:04:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:22.832 20:04:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:22.832 20:04:30 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:22.832 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:22.832 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 
20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:22.832 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.832 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.832 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:22.833 20:04:30 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:22.833 
20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:22.833 20:04:30 
-- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:22.833 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:22.833 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:22.833 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.097 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.097 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.097 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.097 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:23.097 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:23.097 20:04:30 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:23.097 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.097 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.097 20:04:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:23.097 20:04:30 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:23.097 20:04:30 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:23.097 20:04:30 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:23.097 20:04:30 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:23.097 20:04:30 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.097 20:04:30 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:23.097 20:04:30 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:23.098 20:04:30 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:23.098 20:04:30 -- scripts/common.sh@15 -- # local i 00:11:23.098 20:04:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:23.098 20:04:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.098 20:04:30 -- scripts/common.sh@24 -- # return 0 00:11:23.098 20:04:30 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:23.098 20:04:30 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:23.098 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:23.098 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 
20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:23.098 20:04:30 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.098 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.098 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.099 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:23.099 20:04:30 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:23.099 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:23.100 
20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:23.100 20:04:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.100 20:04:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:23.100 20:04:30 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:23.100 20:04:30 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@18 -- # shift 00:11:23.100 20:04:30 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.100 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.100 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:23.100 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:23.100 
20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 
20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.101 20:04:30 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # IFS=: 00:11:23.101 20:04:30 -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.101 20:04:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:23.101 20:04:30 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:23.101 20:04:30 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:23.101 20:04:30 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:23.101 20:04:30 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:23.102 20:04:30 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:23.102 20:04:30 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:23.102 20:04:30 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:23.102 20:04:30 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:23.102 20:04:30 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:23.102 20:04:30 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:23.102 20:04:30 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:23.102 20:04:30 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:23.102 20:04:30 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.102 20:04:30 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:23.102 20:04:30 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:23.102 20:04:30 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:23.102 20:04:30 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:23.102 20:04:30 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.102 20:04:30 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.102 20:04:30 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.102 20:04:30 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:23.102 20:04:30 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:23.102 20:04:30 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:23.102 20:04:30 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:23.102 20:04:30 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:23.102 20:04:30 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:23.102 20:04:30 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.102 20:04:30 -- nvme/functions.sh@197 -- # echo nvme0 00:11:23.102 20:04:30 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.102 20:04:30 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:23.102 20:04:30 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:23.102 20:04:30 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:23.102 20:04:30 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:23.102 20:04:30 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.102 20:04:30 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.102 20:04:30 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.102 20:04:30 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:23.102 20:04:30 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:23.102 20:04:30 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:23.102 20:04:30 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:23.102 20:04:30 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:23.102 20:04:30 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:23.102 20:04:30 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:23.102 20:04:30 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:23.102 20:04:30 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:23.102 20:04:30 -- nvme/functions.sh@204 -- # trap - ERR 00:11:23.102 20:04:30 -- nvme/functions.sh@204 -- # print_backtrace 00:11:23.102 20:04:30 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:23.102 20:04:30 -- common/autotest_common.sh@1142 -- # return 0 00:11:23.102 20:04:30 -- nvme/functions.sh@204 -- # trap - ERR 00:11:23.102 20:04:30 -- nvme/functions.sh@204 -- # print_backtrace 00:11:23.102 20:04:30 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:23.102 20:04:30 -- common/autotest_common.sh@1142 -- # return 0 00:11:23.102 20:04:30 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:23.102 20:04:30 -- nvme/functions.sh@206 -- # echo nvme0 00:11:23.102 20:04:30 -- nvme/functions.sh@207 -- # return 0 00:11:23.102 20:04:30 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:23.102 20:04:30 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:23.102 20:04:30 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.045 0000:00:06.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:24.045 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.305 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.305 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.305 20:04:31 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:24.305 20:04:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:24.305 20:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.305 20:04:31 -- common/autotest_common.sh@10 -- # set +x 00:11:24.305 ************************************ 00:11:24.305 START TEST nvme_flexible_data_placement 00:11:24.305 ************************************ 00:11:24.305 20:04:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:24.567 Initializing NVMe Controllers 00:11:24.567 Attaching to 0000:00:09.0 00:11:24.567 Controller supports FDP Attached to 0000:00:09.0 00:11:24.567 Namespace ID: 1 Endurance Group ID: 1 00:11:24.567 Initialization complete. 00:11:24.567 00:11:24.567 ================================== 00:11:24.567 == FDP tests for Namespace: #01 == 00:11:24.567 ================================== 00:11:24.567 00:11:24.567 Get Feature: FDP: 00:11:24.567 ================= 00:11:24.567 Enabled: Yes 00:11:24.567 FDP configuration Index: 0 00:11:24.567 00:11:24.567 FDP configurations log page 00:11:24.567 =========================== 00:11:24.567 Number of FDP configurations: 1 00:11:24.567 Version: 0 00:11:24.568 Size: 112 00:11:24.568 FDP Configuration Descriptor: 0 00:11:24.568 Descriptor Size: 96 00:11:24.568 Reclaim Group Identifier format: 2 00:11:24.568 FDP Volatile Write Cache: Not Present 00:11:24.568 FDP Configuration: Valid 00:11:24.568 Vendor Specific Size: 0 00:11:24.568 Number of Reclaim Groups: 2 00:11:24.568 Number of Reclaim Unit Handles: 8 00:11:24.568 Max Placement Identifiers: 128 00:11:24.568 Number of Namespaces Supported: 256 00:11:24.568 Reclaim unit Nominal Size: 6000000 bytes 00:11:24.568 Estimated Reclaim Unit Time Limit: Not Reported 00:11:24.568 RUH Desc #000: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #001: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #002: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #003: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #004: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #005: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #006: RUH Type: Initially Isolated 00:11:24.568 RUH Desc #007: RUH Type: Initially Isolated 00:11:24.568 00:11:24.568 FDP reclaim unit handle usage log page 00:11:24.568 ====================================== 00:11:24.568 Number of Reclaim Unit Handles: 8 00:11:24.568 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:24.568 RUH Usage Desc #001: RUH Attributes: Unused 00:11:24.568 RUH Usage Desc #002: RUH Attributes: Unused 00:11:24.568 RUH Usage Desc #003: RUH Attributes: Unused 00:11:24.568 RUH Usage Desc #004: RUH Attributes: Unused 00:11:24.568 RUH Usage Desc #005: RUH Attributes: Unused 00:11:24.568 RUH Usage Desc #006: RUH Attributes: Unused 00:11:24.568 RUH Usage Desc #007: RUH Attributes: Unused 00:11:24.568 00:11:24.568 FDP statistics log page 00:11:24.568 ======================= 00:11:24.568 Host bytes with metadata written: 948473856 00:11:24.568 Media bytes with metadata written: 948772864 00:11:24.568 Media bytes erased: 0 00:11:24.568 00:11:24.568 FDP Reclaim unit handle status
00:11:24.568 ============================== 00:11:24.568 Number of RUHS descriptors: 2 00:11:24.568 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003777 00:11:24.568 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:24.568 00:11:24.568 FDP write on placement id: 0 success 00:11:24.568 00:11:24.568 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:24.568 00:11:24.568 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:24.568 00:11:24.568 Get Feature: FDP Events for Placement handle: #0 00:11:24.568 ======================== 00:11:24.568 Number of FDP Events: 6 00:11:24.568 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:24.568 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:24.568 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:24.568 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:24.568 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:24.568 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:24.568 00:11:24.568 FDP events log page 00:11:24.568 =================== 00:11:24.568 Number of FDP events: 1 00:11:24.568 FDP Event #0: 00:11:24.568 Event Type: RU Not Written to Capacity 00:11:24.568 Placement Identifier: Valid 00:11:24.568 NSID: Valid 00:11:24.568 Location: Valid 00:11:24.568 Placement Identifier: 0 00:11:24.568 Event Timestamp: a 00:11:24.568 Namespace Identifier: 1 00:11:24.568 Reclaim Group Identifier: 0 00:11:24.568 Reclaim Unit Handle Identifier: 0 00:11:24.568 00:11:24.568 FDP test passed 00:11:24.568 00:11:24.568 real 0m0.244s 00:11:24.568 user 0m0.081s 00:11:24.568 sys 0m0.061s 00:11:24.568 ************************************ 00:11:24.568 END TEST nvme_flexible_data_placement 00:11:24.568 ************************************ 00:11:24.568 20:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.568 20:04:32 -- common/autotest_common.sh@10 -- # set +x 00:11:24.568 ************************************ 00:11:24.568 END TEST nvme_fdp 00:11:24.568 ************************************ 00:11:24.568 00:11:24.568 real 0m7.934s 00:11:24.568 user 0m1.103s 00:11:24.568 sys 0m1.601s 00:11:24.568 20:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.568 20:04:32 -- common/autotest_common.sh@10 -- # set +x 00:11:24.568 20:04:32 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:24.568 20:04:32 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:24.568 20:04:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.568 20:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.568 20:04:32 -- common/autotest_common.sh@10 -- # set +x 00:11:24.568 ************************************ 00:11:24.568 START TEST nvme_rpc 00:11:24.568 ************************************ 00:11:24.568 20:04:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:24.829 * Looking for test storage... 
00:11:24.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.829 20:04:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:24.829 20:04:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:24.829 20:04:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:24.829 20:04:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:24.829 20:04:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:24.829 20:04:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:24.829 20:04:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:24.829 20:04:32 -- scripts/common.sh@335 -- # IFS=.-: 00:11:24.829 20:04:32 -- scripts/common.sh@335 -- # read -ra ver1 00:11:24.829 20:04:32 -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.829 20:04:32 -- scripts/common.sh@336 -- # read -ra ver2 00:11:24.829 20:04:32 -- scripts/common.sh@337 -- # local 'op=<' 00:11:24.830 20:04:32 -- scripts/common.sh@339 -- # ver1_l=2 00:11:24.830 20:04:32 -- scripts/common.sh@340 -- # ver2_l=1 00:11:24.830 20:04:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:24.830 20:04:32 -- scripts/common.sh@343 -- # case "$op" in 00:11:24.830 20:04:32 -- scripts/common.sh@344 -- # : 1 00:11:24.830 20:04:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:24.830 20:04:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.830 20:04:32 -- scripts/common.sh@364 -- # decimal 1 00:11:24.830 20:04:32 -- scripts/common.sh@352 -- # local d=1 00:11:24.830 20:04:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.830 20:04:32 -- scripts/common.sh@354 -- # echo 1 00:11:24.830 20:04:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:24.830 20:04:32 -- scripts/common.sh@365 -- # decimal 2 00:11:24.830 20:04:32 -- scripts/common.sh@352 -- # local d=2 00:11:24.830 20:04:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.830 20:04:32 -- scripts/common.sh@354 -- # echo 2 00:11:24.830 20:04:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:24.830 20:04:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:24.830 20:04:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:24.830 20:04:32 -- scripts/common.sh@367 -- # return 0 00:11:24.830 20:04:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.830 20:04:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.830 --rc genhtml_branch_coverage=1 00:11:24.830 --rc genhtml_function_coverage=1 00:11:24.830 --rc genhtml_legend=1 00:11:24.830 --rc geninfo_all_blocks=1 00:11:24.830 --rc geninfo_unexecuted_blocks=1 00:11:24.830 00:11:24.830 ' 00:11:24.830 20:04:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.830 --rc genhtml_branch_coverage=1 00:11:24.830 --rc genhtml_function_coverage=1 00:11:24.830 --rc genhtml_legend=1 00:11:24.830 --rc geninfo_all_blocks=1 00:11:24.830 --rc geninfo_unexecuted_blocks=1 00:11:24.830 00:11:24.830 ' 00:11:24.830 20:04:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.830 --rc genhtml_branch_coverage=1 00:11:24.830 --rc genhtml_function_coverage=1 00:11:24.830 --rc genhtml_legend=1 00:11:24.830 --rc geninfo_all_blocks=1 00:11:24.830 --rc geninfo_unexecuted_blocks=1 00:11:24.830 00:11:24.830 ' 00:11:24.830 20:04:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.830 --rc genhtml_branch_coverage=1 00:11:24.830 --rc genhtml_function_coverage=1 00:11:24.830 --rc genhtml_legend=1 00:11:24.830 --rc geninfo_all_blocks=1 00:11:24.830 --rc geninfo_unexecuted_blocks=1 00:11:24.830 00:11:24.830 ' 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:24.830 20:04:32 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:24.830 20:04:32 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:24.830 20:04:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:24.830 20:04:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:24.830 20:04:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:24.830 20:04:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:24.830 20:04:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:24.830 20:04:32 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:24.830 20:04:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:24.830 20:04:32 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:24.830 20:04:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:24.830 20:04:32 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:24.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66513 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:24.830 20:04:32 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66513 00:11:24.830 20:04:32 -- common/autotest_common.sh@829 -- # '[' -z 66513 ']' 00:11:24.830 20:04:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.830 20:04:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.830 20:04:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.830 20:04:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.830 20:04:32 -- common/autotest_common.sh@10 -- # set +x 00:11:24.830 [2024-12-16 20:04:32.460055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:24.830 [2024-12-16 20:04:32.460373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66513 ] 00:11:25.091 [2024-12-16 20:04:32.613264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:25.352 [2024-12-16 20:04:32.831899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:25.352 [2024-12-16 20:04:32.832683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.352 [2024-12-16 20:04:32.832798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.736 20:04:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.736 20:04:34 -- common/autotest_common.sh@862 -- # return 0 00:11:26.736 20:04:34 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:26.736 Nvme0n1 00:11:26.736 20:04:34 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:26.736 20:04:34 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:26.995 request: 00:11:26.995 { 00:11:26.995 "filename": "non_existing_file", 00:11:26.995 "bdev_name": "Nvme0n1", 00:11:26.995 "method": "bdev_nvme_apply_firmware", 00:11:26.995 "req_id": 1 00:11:26.995 } 00:11:26.995 Got JSON-RPC error response 00:11:26.995 response: 00:11:26.995 { 00:11:26.995 "code": -32603, 00:11:26.995 "message": "open file failed." 00:11:26.995 } 00:11:26.995 20:04:34 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:26.995 20:04:34 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:26.995 20:04:34 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:26.995 20:04:34 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:26.995 20:04:34 -- nvme/nvme_rpc.sh@40 -- # killprocess 66513 00:11:26.995 20:04:34 -- common/autotest_common.sh@936 -- # '[' -z 66513 ']' 00:11:26.995 20:04:34 -- common/autotest_common.sh@940 -- # kill -0 66513 00:11:26.995 20:04:34 -- common/autotest_common.sh@941 -- # uname 00:11:26.995 20:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.995 20:04:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66513 00:11:27.253 killing process with pid 66513 00:11:27.253 20:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:27.253 20:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:27.253 20:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66513' 00:11:27.253 20:04:34 -- common/autotest_common.sh@955 -- # kill 66513 00:11:27.253 20:04:34 -- common/autotest_common.sh@960 -- # wait 66513 00:11:28.189 ************************************ 00:11:28.189 END TEST nvme_rpc 00:11:28.189 ************************************ 00:11:28.189 00:11:28.189 real 0m3.599s 00:11:28.189 user 0m6.750s 00:11:28.189 sys 0m0.599s 00:11:28.189 20:04:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:28.189 20:04:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.189 20:04:35 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:28.189 20:04:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:28.189 20:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 
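The nvme_rpc test that finishes above is mostly a negative-path check: it attaches the first PCIe controller as bdev Nvme0n1 over JSON-RPC, then calls bdev_nvme_apply_firmware with a file that does not exist and requires the call to fail with the -32603 "open file failed." error shown in the response, rather than succeed. Below is a condensed sketch of that check, not the verbatim test script; it assumes a spdk_tgt is already listening on the default /var/tmp/spdk.sock and reuses the rpc.py path and controller address from this run.

#!/usr/bin/env bash
# Hedged sketch of the negative firmware-update check from nvme_rpc.sh (condensed, not verbatim).
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                        # path taken from the trace above
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0      # exposes Nvme0n1
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "firmware apply unexpectedly succeeded for a missing file" >&2
    exit 1
fi
# Expected failure: JSON-RPC error code -32603 with message "open file failed." (see the response above).
$rpc bdev_nvme_detach_controller Nvme0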
00:11:28.189 20:04:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.189 ************************************ 00:11:28.189 START TEST nvme_rpc_timeouts 00:11:28.189 ************************************ 00:11:28.189 20:04:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:28.450 * Looking for test storage... 00:11:28.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:28.450 20:04:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:28.450 20:04:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:28.450 20:04:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:28.450 20:04:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:28.450 20:04:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:28.450 20:04:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:28.450 20:04:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:28.450 20:04:35 -- scripts/common.sh@335 -- # IFS=.-: 00:11:28.450 20:04:35 -- scripts/common.sh@335 -- # read -ra ver1 00:11:28.450 20:04:35 -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.450 20:04:35 -- scripts/common.sh@336 -- # read -ra ver2 00:11:28.450 20:04:35 -- scripts/common.sh@337 -- # local 'op=<' 00:11:28.450 20:04:35 -- scripts/common.sh@339 -- # ver1_l=2 00:11:28.450 20:04:35 -- scripts/common.sh@340 -- # ver2_l=1 00:11:28.450 20:04:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:28.450 20:04:35 -- scripts/common.sh@343 -- # case "$op" in 00:11:28.450 20:04:35 -- scripts/common.sh@344 -- # : 1 00:11:28.450 20:04:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:28.450 20:04:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.450 20:04:35 -- scripts/common.sh@364 -- # decimal 1 00:11:28.450 20:04:35 -- scripts/common.sh@352 -- # local d=1 00:11:28.450 20:04:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.450 20:04:35 -- scripts/common.sh@354 -- # echo 1 00:11:28.450 20:04:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:28.450 20:04:35 -- scripts/common.sh@365 -- # decimal 2 00:11:28.450 20:04:35 -- scripts/common.sh@352 -- # local d=2 00:11:28.450 20:04:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.450 20:04:35 -- scripts/common.sh@354 -- # echo 2 00:11:28.450 20:04:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:28.450 20:04:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.450 20:04:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:28.450 20:04:35 -- scripts/common.sh@367 -- # return 0 00:11:28.450 20:04:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.450 20:04:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:28.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.450 --rc genhtml_branch_coverage=1 00:11:28.450 --rc genhtml_function_coverage=1 00:11:28.450 --rc genhtml_legend=1 00:11:28.450 --rc geninfo_all_blocks=1 00:11:28.450 --rc geninfo_unexecuted_blocks=1 00:11:28.450 00:11:28.450 ' 00:11:28.450 20:04:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:28.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.450 --rc genhtml_branch_coverage=1 00:11:28.450 --rc genhtml_function_coverage=1 00:11:28.450 --rc genhtml_legend=1 00:11:28.450 --rc geninfo_all_blocks=1 00:11:28.450 --rc geninfo_unexecuted_blocks=1 00:11:28.450 00:11:28.450 ' 00:11:28.450 20:04:35 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:28.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.450 --rc genhtml_branch_coverage=1 00:11:28.450 --rc genhtml_function_coverage=1 00:11:28.450 --rc genhtml_legend=1 00:11:28.451 --rc geninfo_all_blocks=1 00:11:28.451 --rc geninfo_unexecuted_blocks=1 00:11:28.451 00:11:28.451 ' 00:11:28.451 20:04:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.451 --rc genhtml_branch_coverage=1 00:11:28.451 --rc genhtml_function_coverage=1 00:11:28.451 --rc genhtml_legend=1 00:11:28.451 --rc geninfo_all_blocks=1 00:11:28.451 --rc geninfo_unexecuted_blocks=1 00:11:28.451 00:11:28.451 ' 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66585 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66585 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66616 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:28.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66616 00:11:28.451 20:04:35 -- common/autotest_common.sh@829 -- # '[' -z 66616 ']' 00:11:28.451 20:04:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.451 20:04:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.451 20:04:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.451 20:04:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.451 20:04:35 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:28.451 20:04:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 [2024-12-16 20:04:36.047862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:28.451 [2024-12-16 20:04:36.047983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66616 ] 00:11:28.712 [2024-12-16 20:04:36.194658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.712 [2024-12-16 20:04:36.331789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.712 [2024-12-16 20:04:36.332190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.712 [2024-12-16 20:04:36.332216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.283 Checking default timeout settings: 00:11:29.283 20:04:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.283 20:04:36 -- common/autotest_common.sh@862 -- # return 0 00:11:29.283 20:04:36 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:29.283 20:04:36 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:29.544 Making settings changes with rpc: 00:11:29.544 20:04:37 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:29.544 20:04:37 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:29.805 Check default vs. modified settings: 00:11:29.805 20:04:37 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:29.805 20:04:37 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.064 Setting action_on_timeout is changed as expected. 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
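The timeouts test follows a snapshot-and-compare pattern: save_config captures the target configuration before the change, bdev_nvme_set_options applies the new timeout values, save_config captures it again, and each setting is pulled out of both snapshots with the grep/awk/sed chain visible above and compared, which is what prints the "changed as expected" lines here and below. A compressed sketch of that pattern follows, reusing the temp-file names and option values from this run; a standalone script would normally derive the file names from its own PID rather than hard-coding 66585.

#!/usr/bin/env bash
# Hedged sketch of the default-vs-modified settings check from nvme_rpc_timeouts.sh (condensed).
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
default=/tmp/settings_default_66585              # file names taken from this run's target PID
modified=/tmp/settings_modified_66585
$rpc save_config > "$default"                    # snapshot before changing anything
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > "$modified"                   # snapshot after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ $before == "$after" ]]; then
        echo "Setting $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done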
00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.064 Setting timeout_us is changed as expected. 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.064 Setting timeout_admin_us is changed as expected. 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66585 /tmp/settings_modified_66585 00:11:30.064 20:04:37 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66616 00:11:30.064 20:04:37 -- common/autotest_common.sh@936 -- # '[' -z 66616 ']' 00:11:30.064 20:04:37 -- common/autotest_common.sh@940 -- # kill -0 66616 00:11:30.064 20:04:37 -- common/autotest_common.sh@941 -- # uname 00:11:30.064 20:04:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.064 20:04:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66616 00:11:30.064 killing process with pid 66616 00:11:30.064 20:04:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:30.064 20:04:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:30.064 20:04:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66616' 00:11:30.064 20:04:37 -- common/autotest_common.sh@955 -- # kill 66616 00:11:30.064 20:04:37 -- common/autotest_common.sh@960 -- # wait 66616 00:11:31.505 RPC TIMEOUT SETTING TEST PASSED. 00:11:31.505 20:04:38 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
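For readability, the default-vs-modified comparison traced above reduces to the minimal sketch below. It is assembled from the commands visible in the trace (the rpc.py path, temp-file names, RPC options, and the grep/awk/sed filters all appear in the log); the condensed loop body and variable names are illustrative, not the verbatim nvme_rpc_timeouts.sh script.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Snapshot the default bdev_nvme options, apply the test values, snapshot again.
    "$rpc" save_config > /tmp/settings_default_66585
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_66585
    # Each tracked setting must differ between the two snapshots for the test to pass.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_66585  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_66585 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done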
00:11:31.505 00:11:31.505 real 0m3.016s 00:11:31.505 user 0m5.635s 00:11:31.505 sys 0m0.483s 00:11:31.505 20:04:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:31.505 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 ************************************ 00:11:31.505 END TEST nvme_rpc_timeouts 00:11:31.505 ************************************ 00:11:31.505 20:04:38 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:31.505 20:04:38 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:31.505 20:04:38 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:31.505 20:04:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.505 20:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.505 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 ************************************ 00:11:31.505 START TEST nvme_xnvme 00:11:31.505 ************************************ 00:11:31.505 20:04:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:31.505 * Looking for test storage... 00:11:31.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:31.505 20:04:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:31.505 20:04:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:31.505 20:04:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:31.505 20:04:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:31.505 20:04:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:31.505 20:04:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:31.505 20:04:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:31.505 20:04:39 -- scripts/common.sh@335 -- # IFS=.-: 00:11:31.505 20:04:39 -- scripts/common.sh@335 -- # read -ra ver1 00:11:31.505 20:04:39 -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.505 20:04:39 -- scripts/common.sh@336 -- # read -ra ver2 00:11:31.505 20:04:39 -- scripts/common.sh@337 -- # local 'op=<' 00:11:31.505 20:04:39 -- scripts/common.sh@339 -- # ver1_l=2 00:11:31.505 20:04:39 -- scripts/common.sh@340 -- # ver2_l=1 00:11:31.505 20:04:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:31.505 20:04:39 -- scripts/common.sh@343 -- # case "$op" in 00:11:31.505 20:04:39 -- scripts/common.sh@344 -- # : 1 00:11:31.505 20:04:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:31.505 20:04:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.505 20:04:39 -- scripts/common.sh@364 -- # decimal 1 00:11:31.505 20:04:39 -- scripts/common.sh@352 -- # local d=1 00:11:31.505 20:04:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.505 20:04:39 -- scripts/common.sh@354 -- # echo 1 00:11:31.505 20:04:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:31.505 20:04:39 -- scripts/common.sh@365 -- # decimal 2 00:11:31.505 20:04:39 -- scripts/common.sh@352 -- # local d=2 00:11:31.505 20:04:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.505 20:04:39 -- scripts/common.sh@354 -- # echo 2 00:11:31.505 20:04:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:31.505 20:04:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:31.505 20:04:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:31.505 20:04:39 -- scripts/common.sh@367 -- # return 0 00:11:31.505 20:04:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.505 20:04:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:31.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.505 --rc genhtml_branch_coverage=1 00:11:31.505 --rc genhtml_function_coverage=1 00:11:31.505 --rc genhtml_legend=1 00:11:31.505 --rc geninfo_all_blocks=1 00:11:31.505 --rc geninfo_unexecuted_blocks=1 00:11:31.505 00:11:31.505 ' 00:11:31.505 20:04:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:31.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.505 --rc genhtml_branch_coverage=1 00:11:31.505 --rc genhtml_function_coverage=1 00:11:31.505 --rc genhtml_legend=1 00:11:31.505 --rc geninfo_all_blocks=1 00:11:31.505 --rc geninfo_unexecuted_blocks=1 00:11:31.505 00:11:31.505 ' 00:11:31.505 20:04:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:31.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.505 --rc genhtml_branch_coverage=1 00:11:31.505 --rc genhtml_function_coverage=1 00:11:31.505 --rc genhtml_legend=1 00:11:31.505 --rc geninfo_all_blocks=1 00:11:31.505 --rc geninfo_unexecuted_blocks=1 00:11:31.505 00:11:31.505 ' 00:11:31.505 20:04:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:31.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.505 --rc genhtml_branch_coverage=1 00:11:31.505 --rc genhtml_function_coverage=1 00:11:31.505 --rc genhtml_legend=1 00:11:31.505 --rc geninfo_all_blocks=1 00:11:31.505 --rc geninfo_unexecuted_blocks=1 00:11:31.505 00:11:31.505 ' 00:11:31.505 20:04:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:31.505 20:04:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.505 20:04:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.505 20:04:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.505 20:04:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.505 20:04:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.505 20:04:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.505 20:04:39 -- paths/export.sh@5 -- # export PATH 00:11:31.505 20:04:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:31.505 20:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.505 20:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.505 20:04:39 -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 ************************************ 00:11:31.505 START TEST xnvme_to_malloc_dd_copy 00:11:31.505 ************************************ 00:11:31.505 20:04:39 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:31.505 20:04:39 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:31.505 20:04:39 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:31.505 20:04:39 -- dd/common.sh@191 -- # return 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@18 -- # local io 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:31.505 20:04:39 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:31.505 20:04:39 -- dd/common.sh@31 -- # xtrace_disable 00:11:31.505 20:04:39 -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 { 00:11:31.505 "subsystems": [ 00:11:31.505 { 00:11:31.505 "subsystem": "bdev", 00:11:31.505 "config": [ 00:11:31.505 { 00:11:31.505 "params": { 00:11:31.505 "block_size": 512, 00:11:31.505 "num_blocks": 2097152, 00:11:31.505 "name": "malloc0" 00:11:31.505 }, 00:11:31.505 "method": "bdev_malloc_create" 00:11:31.505 }, 00:11:31.505 { 00:11:31.505 "params": { 00:11:31.505 "io_mechanism": "libaio", 00:11:31.505 "filename": "/dev/nullb0", 00:11:31.505 "name": "null0" 00:11:31.505 }, 00:11:31.505 "method": "bdev_xnvme_create" 00:11:31.505 }, 00:11:31.505 { 00:11:31.505 "method": "bdev_wait_for_examine" 00:11:31.505 } 00:11:31.505 ] 00:11:31.505 } 00:11:31.505 ] 00:11:31.505 } 00:11:31.766 [2024-12-16 20:04:39.145516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:31.766 [2024-12-16 20:04:39.145786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66745 ] 00:11:31.766 [2024-12-16 20:04:39.299793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.027 [2024-12-16 20:04:39.496933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.941  [2024-12-16T20:04:42.967Z] Copying: 233/1024 [MB] (233 MBps) [2024-12-16T20:04:43.909Z] Copying: 505/1024 [MB] (271 MBps) [2024-12-16T20:04:44.480Z] Copying: 818/1024 [MB] (312 MBps) [2024-12-16T20:04:46.392Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:11:38.752 00:11:38.752 20:04:46 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:38.752 20:04:46 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:38.752 20:04:46 -- dd/common.sh@31 -- # xtrace_disable 00:11:38.752 20:04:46 -- common/autotest_common.sh@10 -- # set +x 00:11:38.752 { 00:11:38.752 "subsystems": [ 00:11:38.752 { 00:11:38.752 "subsystem": "bdev", 00:11:38.752 "config": [ 00:11:38.752 { 00:11:38.752 "params": { 00:11:38.752 "block_size": 512, 00:11:38.752 "num_blocks": 2097152, 00:11:38.752 "name": "malloc0" 00:11:38.752 }, 00:11:38.752 "method": "bdev_malloc_create" 00:11:38.752 }, 00:11:38.752 { 00:11:38.752 "params": { 00:11:38.752 "io_mechanism": "libaio", 00:11:38.752 "filename": "/dev/nullb0", 00:11:38.752 "name": "null0" 00:11:38.752 }, 00:11:38.752 "method": "bdev_xnvme_create" 00:11:38.752 }, 00:11:38.752 { 00:11:38.752 "method": "bdev_wait_for_examine" 00:11:38.752 } 00:11:38.752 ] 00:11:38.752 } 00:11:38.752 ] 00:11:38.752 } 00:11:38.752 [2024-12-16 20:04:46.278407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:38.752 [2024-12-16 20:04:46.278531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66827 ] 00:11:39.012 [2024-12-16 20:04:46.428292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.012 [2024-12-16 20:04:46.565696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.927  [2024-12-16T20:04:49.509Z] Copying: 315/1024 [MB] (315 MBps) [2024-12-16T20:04:50.452Z] Copying: 630/1024 [MB] (315 MBps) [2024-12-16T20:04:50.713Z] Copying: 944/1024 [MB] (314 MBps) [2024-12-16T20:04:52.624Z] Copying: 1024/1024 [MB] (average 314 MBps) 00:11:44.984 00:11:44.984 20:04:52 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:44.984 20:04:52 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:44.984 20:04:52 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:44.984 20:04:52 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:44.984 20:04:52 -- dd/common.sh@31 -- # xtrace_disable 00:11:44.984 20:04:52 -- common/autotest_common.sh@10 -- # set +x 00:11:44.984 { 00:11:44.984 "subsystems": [ 00:11:44.984 { 00:11:44.984 "subsystem": "bdev", 00:11:44.985 "config": [ 00:11:44.985 { 00:11:44.985 "params": { 00:11:44.985 "block_size": 512, 00:11:44.985 "num_blocks": 2097152, 00:11:44.985 "name": "malloc0" 00:11:44.985 }, 00:11:44.985 "method": "bdev_malloc_create" 00:11:44.985 }, 00:11:44.985 { 00:11:44.985 "params": { 00:11:44.985 "io_mechanism": "io_uring", 00:11:44.985 "filename": "/dev/nullb0", 00:11:44.985 "name": "null0" 00:11:44.985 }, 00:11:44.985 "method": "bdev_xnvme_create" 00:11:44.985 }, 00:11:44.985 { 00:11:44.985 "method": "bdev_wait_for_examine" 00:11:44.985 } 00:11:44.985 ] 00:11:44.985 } 00:11:44.985 ] 00:11:44.985 } 00:11:44.985 [2024-12-16 20:04:52.606951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:44.985 [2024-12-16 20:04:52.607178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66911 ] 00:11:45.245 [2024-12-16 20:04:52.753844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.506 [2024-12-16 20:04:52.897412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.420  [2024-12-16T20:04:56.007Z] Copying: 322/1024 [MB] (322 MBps) [2024-12-16T20:04:56.952Z] Copying: 644/1024 [MB] (322 MBps) [2024-12-16T20:04:56.952Z] Copying: 966/1024 [MB] (322 MBps) [2024-12-16T20:04:58.866Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:11:51.226 00:11:51.226 20:04:58 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:51.226 20:04:58 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:51.226 20:04:58 -- dd/common.sh@31 -- # xtrace_disable 00:11:51.226 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:11:51.226 { 00:11:51.226 "subsystems": [ 00:11:51.226 { 00:11:51.226 "subsystem": "bdev", 00:11:51.226 "config": [ 00:11:51.226 { 00:11:51.226 "params": { 00:11:51.226 "block_size": 512, 00:11:51.226 "num_blocks": 2097152, 00:11:51.226 "name": "malloc0" 00:11:51.226 }, 00:11:51.226 "method": "bdev_malloc_create" 00:11:51.226 }, 00:11:51.226 { 00:11:51.226 "params": { 00:11:51.226 "io_mechanism": "io_uring", 00:11:51.226 "filename": "/dev/nullb0", 00:11:51.226 "name": "null0" 00:11:51.226 }, 00:11:51.226 "method": "bdev_xnvme_create" 00:11:51.226 }, 00:11:51.226 { 00:11:51.226 "method": "bdev_wait_for_examine" 00:11:51.226 } 00:11:51.226 ] 00:11:51.226 } 00:11:51.226 ] 00:11:51.226 } 00:11:51.226 [2024-12-16 20:04:58.837907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:51.226 [2024-12-16 20:04:58.837990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66990 ] 00:11:51.487 [2024-12-16 20:04:58.978095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.487 [2024-12-16 20:04:59.114015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.469  [2024-12-16T20:05:02.052Z] Copying: 327/1024 [MB] (327 MBps) [2024-12-16T20:05:02.995Z] Copying: 655/1024 [MB] (328 MBps) [2024-12-16T20:05:02.995Z] Copying: 984/1024 [MB] (328 MBps) [2024-12-16T20:05:05.541Z] Copying: 1024/1024 [MB] (average 327 MBps) 00:11:57.901 00:11:57.901 20:05:04 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:57.901 20:05:04 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:57.901 ************************************ 00:11:57.901 END TEST xnvme_to_malloc_dd_copy 00:11:57.901 ************************************ 00:11:57.901 00:11:57.901 real 0m25.954s 00:11:57.901 user 0m22.828s 00:11:57.901 sys 0m2.557s 00:11:57.901 20:05:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:57.901 20:05:05 -- common/autotest_common.sh@10 -- # set +x 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:57.901 20:05:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:57.901 20:05:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.901 20:05:05 -- common/autotest_common.sh@10 -- # set +x 00:11:57.901 ************************************ 00:11:57.901 START TEST xnvme_bdevperf 00:11:57.901 ************************************ 00:11:57.901 20:05:05 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:57.901 20:05:05 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:57.901 20:05:05 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:57.901 20:05:05 -- dd/common.sh@191 -- # return 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@60 -- # local io 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:57.901 20:05:05 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:57.901 20:05:05 -- dd/common.sh@31 -- # xtrace_disable 00:11:57.901 20:05:05 -- common/autotest_common.sh@10 -- # set +x 00:11:57.901 { 00:11:57.901 "subsystems": [ 00:11:57.901 { 00:11:57.901 "subsystem": "bdev", 00:11:57.901 "config": [ 00:11:57.901 { 00:11:57.901 "params": { 00:11:57.901 "io_mechanism": "libaio", 
00:11:57.901 "filename": "/dev/nullb0", 00:11:57.901 "name": "null0" 00:11:57.901 }, 00:11:57.901 "method": "bdev_xnvme_create" 00:11:57.901 }, 00:11:57.901 { 00:11:57.901 "method": "bdev_wait_for_examine" 00:11:57.901 } 00:11:57.901 ] 00:11:57.901 } 00:11:57.901 ] 00:11:57.901 } 00:11:57.901 [2024-12-16 20:05:05.136237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:57.901 [2024-12-16 20:05:05.136494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67089 ] 00:11:57.901 [2024-12-16 20:05:05.286752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.901 [2024-12-16 20:05:05.482428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.161 Running I/O for 5 seconds... 00:12:03.452 00:12:03.452 Latency(us) 00:12:03.452 [2024-12-16T20:05:11.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.452 [2024-12-16T20:05:11.092Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:03.452 null0 : 5.00 197152.75 770.13 0.00 0.00 322.27 108.70 2709.66 00:12:03.452 [2024-12-16T20:05:11.092Z] =================================================================================================================== 00:12:03.452 [2024-12-16T20:05:11.092Z] Total : 197152.75 770.13 0.00 0.00 322.27 108.70 2709.66 00:12:04.023 20:05:11 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:04.023 20:05:11 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:04.023 20:05:11 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:04.023 20:05:11 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:04.023 20:05:11 -- dd/common.sh@31 -- # xtrace_disable 00:12:04.023 20:05:11 -- common/autotest_common.sh@10 -- # set +x 00:12:04.023 { 00:12:04.023 "subsystems": [ 00:12:04.023 { 00:12:04.023 "subsystem": "bdev", 00:12:04.023 "config": [ 00:12:04.023 { 00:12:04.023 "params": { 00:12:04.023 "io_mechanism": "io_uring", 00:12:04.023 "filename": "/dev/nullb0", 00:12:04.023 "name": "null0" 00:12:04.023 }, 00:12:04.023 "method": "bdev_xnvme_create" 00:12:04.023 }, 00:12:04.023 { 00:12:04.023 "method": "bdev_wait_for_examine" 00:12:04.023 } 00:12:04.023 ] 00:12:04.023 } 00:12:04.023 ] 00:12:04.023 } 00:12:04.023 [2024-12-16 20:05:11.478364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:04.023 [2024-12-16 20:05:11.478590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67163 ] 00:12:04.023 [2024-12-16 20:05:11.625764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.284 [2024-12-16 20:05:11.761100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.546 Running I/O for 5 seconds... 
00:12:09.836 00:12:09.836 Latency(us) 00:12:09.836 [2024-12-16T20:05:17.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.836 [2024-12-16T20:05:17.476Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:09.836 null0 : 5.00 238771.20 932.70 0.00 0.00 265.90 148.87 324.53 00:12:09.836 [2024-12-16T20:05:17.476Z] =================================================================================================================== 00:12:09.836 [2024-12-16T20:05:17.476Z] Total : 238771.20 932.70 0.00 0.00 265.90 148.87 324.53 00:12:10.096 20:05:17 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:10.097 20:05:17 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:10.097 ************************************ 00:12:10.097 END TEST xnvme_bdevperf 00:12:10.097 ************************************ 00:12:10.097 00:12:10.097 real 0m12.535s 00:12:10.097 user 0m10.062s 00:12:10.097 sys 0m2.222s 00:12:10.097 20:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.097 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:10.097 00:12:10.097 real 0m38.741s 00:12:10.097 user 0m33.001s 00:12:10.097 sys 0m4.890s 00:12:10.097 20:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.097 ************************************ 00:12:10.097 END TEST nvme_xnvme 00:12:10.097 ************************************ 00:12:10.097 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:10.097 20:05:17 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:10.097 20:05:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.097 20:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.097 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:10.097 ************************************ 00:12:10.097 START TEST blockdev_xnvme 00:12:10.097 ************************************ 00:12:10.097 20:05:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:10.097 * Looking for test storage... 00:12:10.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:10.097 20:05:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:10.097 20:05:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:10.358 20:05:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:10.358 20:05:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:10.358 20:05:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:10.358 20:05:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:10.358 20:05:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:10.358 20:05:17 -- scripts/common.sh@335 -- # IFS=.-: 00:12:10.358 20:05:17 -- scripts/common.sh@335 -- # read -ra ver1 00:12:10.358 20:05:17 -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.358 20:05:17 -- scripts/common.sh@336 -- # read -ra ver2 00:12:10.358 20:05:17 -- scripts/common.sh@337 -- # local 'op=<' 00:12:10.358 20:05:17 -- scripts/common.sh@339 -- # ver1_l=2 00:12:10.358 20:05:17 -- scripts/common.sh@340 -- # ver2_l=1 00:12:10.358 20:05:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:10.358 20:05:17 -- scripts/common.sh@343 -- # case "$op" in 00:12:10.358 20:05:17 -- scripts/common.sh@344 -- # : 1 00:12:10.358 20:05:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:10.358 20:05:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.358 20:05:17 -- scripts/common.sh@364 -- # decimal 1 00:12:10.358 20:05:17 -- scripts/common.sh@352 -- # local d=1 00:12:10.358 20:05:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.358 20:05:17 -- scripts/common.sh@354 -- # echo 1 00:12:10.358 20:05:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:10.358 20:05:17 -- scripts/common.sh@365 -- # decimal 2 00:12:10.358 20:05:17 -- scripts/common.sh@352 -- # local d=2 00:12:10.358 20:05:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.358 20:05:17 -- scripts/common.sh@354 -- # echo 2 00:12:10.358 20:05:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:10.358 20:05:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:10.358 20:05:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:10.358 20:05:17 -- scripts/common.sh@367 -- # return 0 00:12:10.358 20:05:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.358 20:05:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:10.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.358 --rc genhtml_branch_coverage=1 00:12:10.358 --rc genhtml_function_coverage=1 00:12:10.358 --rc genhtml_legend=1 00:12:10.358 --rc geninfo_all_blocks=1 00:12:10.358 --rc geninfo_unexecuted_blocks=1 00:12:10.358 00:12:10.358 ' 00:12:10.358 20:05:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:10.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.358 --rc genhtml_branch_coverage=1 00:12:10.358 --rc genhtml_function_coverage=1 00:12:10.358 --rc genhtml_legend=1 00:12:10.358 --rc geninfo_all_blocks=1 00:12:10.358 --rc geninfo_unexecuted_blocks=1 00:12:10.358 00:12:10.358 ' 00:12:10.358 20:05:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:10.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.358 --rc genhtml_branch_coverage=1 00:12:10.358 --rc genhtml_function_coverage=1 00:12:10.358 --rc genhtml_legend=1 00:12:10.358 --rc geninfo_all_blocks=1 00:12:10.358 --rc geninfo_unexecuted_blocks=1 00:12:10.358 00:12:10.358 ' 00:12:10.358 20:05:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:10.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.358 --rc genhtml_branch_coverage=1 00:12:10.358 --rc genhtml_function_coverage=1 00:12:10.358 --rc genhtml_legend=1 00:12:10.358 --rc geninfo_all_blocks=1 00:12:10.358 --rc geninfo_unexecuted_blocks=1 00:12:10.358 00:12:10.358 ' 00:12:10.358 20:05:17 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:10.358 20:05:17 -- bdev/nbd_common.sh@6 -- # set -e 00:12:10.358 20:05:17 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:10.358 20:05:17 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:10.358 20:05:17 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:10.358 20:05:17 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:10.358 20:05:17 -- bdev/blockdev.sh@18 -- # : 00:12:10.358 20:05:17 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:10.358 20:05:17 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:10.358 20:05:17 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:10.358 20:05:17 -- bdev/blockdev.sh@672 -- # uname -s 00:12:10.358 20:05:17 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:10.358 20:05:17 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:10.358 20:05:17 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:10.358 20:05:17 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:10.358 20:05:17 -- bdev/blockdev.sh@682 -- # dek= 00:12:10.358 20:05:17 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:10.358 20:05:17 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:10.358 20:05:17 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:10.358 20:05:17 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:10.358 20:05:17 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:10.358 20:05:17 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:10.358 20:05:17 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67304 00:12:10.358 20:05:17 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:10.358 20:05:17 -- bdev/blockdev.sh@47 -- # waitforlisten 67304 00:12:10.358 20:05:17 -- common/autotest_common.sh@829 -- # '[' -z 67304 ']' 00:12:10.358 20:05:17 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:10.358 20:05:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.359 20:05:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.359 20:05:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.359 20:05:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.359 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:10.359 [2024-12-16 20:05:17.883348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:10.359 [2024-12-16 20:05:17.883564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67304 ] 00:12:10.619 [2024-12-16 20:05:18.033317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.619 [2024-12-16 20:05:18.168086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.619 [2024-12-16 20:05:18.168412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.191 20:05:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.191 20:05:18 -- common/autotest_common.sh@862 -- # return 0 00:12:11.191 20:05:18 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:11.191 20:05:18 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:11.191 20:05:18 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:11.191 20:05:18 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:11.191 20:05:18 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:11.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.453 Waiting for block devices as requested 00:12:11.713 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.713 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.713 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.713 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.007 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:17.007 20:05:24 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:17.007 20:05:24 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:17.007 20:05:24 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:17.007 20:05:24 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:17.007 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.007 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:17.007 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:17.007 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.007 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:17.007 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:17.007 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.007 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:17.007 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:17.007 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.007 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:17.007 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:17.007 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.007 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.007 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:17.007 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:17.008 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:17.008 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.008 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.008 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:17.008 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:17.008 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:17.008 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.008 20:05:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:17.008 20:05:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:17.008 20:05:24 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:17.008 20:05:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:17.008 20:05:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:17.008 20:05:24 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:17.008 20:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.008 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 20:05:24 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:17.008 nvme0n1 00:12:17.008 nvme1n1 00:12:17.008 nvme1n2 00:12:17.008 nvme1n3 00:12:17.008 nvme2n1 00:12:17.008 nvme3n1 00:12:17.008 20:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:17.008 20:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.008 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 20:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@738 -- # cat 00:12:17.008 20:05:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:17.008 20:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.008 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 20:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:17.008 20:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.008 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 20:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:17.008 20:05:24 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.008 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 20:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:17.008 20:05:24 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:17.008 20:05:24 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:17.008 20:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.008 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 20:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.008 20:05:24 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:17.008 20:05:24 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:17.008 20:05:24 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8b6e77a4-1707-41e2-a0c0-2364fd1b1129"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8b6e77a4-1707-41e2-a0c0-2364fd1b1129",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d61e70b6-eeb1-4e18-a176-2d64ec31249b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d61e70b6-eeb1-4e18-a176-2d64ec31249b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "e213af7f-5fc4-4201-8ee5-13c15db247b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e213af7f-5fc4-4201-8ee5-13c15db247b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "62a59a6c-cc26-4364-8c68-f8a855c983b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "62a59a6c-cc26-4364-8c68-f8a855c983b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "316c982f-5da7-4edc-9b83-fba0bbac543d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "316c982f-5da7-4edc-9b83-fba0bbac543d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "93abceef-f43c-40c5-8e19-a10b2c644b0e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "93abceef-f43c-40c5-8e19-a10b2c644b0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:17.008 20:05:24 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:17.008 20:05:24 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:17.008 20:05:24 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:17.008 20:05:24 -- bdev/blockdev.sh@752 -- # killprocess 67304 00:12:17.008 20:05:24 -- common/autotest_common.sh@936 -- # '[' -z 67304 ']' 00:12:17.008 20:05:24 -- common/autotest_common.sh@940 -- # kill -0 67304 00:12:17.008 20:05:24 -- common/autotest_common.sh@941 -- # uname 00:12:17.008 20:05:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:17.008 20:05:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67304 00:12:17.008 killing process with pid 67304 00:12:17.008 20:05:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:17.008 20:05:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:17.008 20:05:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67304' 00:12:17.008 20:05:24 -- common/autotest_common.sh@955 -- # kill 67304 00:12:17.008 20:05:24 -- common/autotest_common.sh@960 -- # wait 67304 00:12:18.392 20:05:25 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:18.392 20:05:25 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:18.392 20:05:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:18.392 20:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.392 20:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:18.392 ************************************ 00:12:18.392 START TEST bdev_hello_world 00:12:18.392 ************************************ 00:12:18.392 20:05:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:18.392 [2024-12-16 20:05:25.817926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:18.392 [2024-12-16 20:05:25.818039] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67679 ] 00:12:18.392 [2024-12-16 20:05:25.965510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.653 [2024-12-16 20:05:26.109201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.914 [2024-12-16 20:05:26.390088] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:18.914 [2024-12-16 20:05:26.390127] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:18.914 [2024-12-16 20:05:26.390139] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:18.914 [2024-12-16 20:05:26.391599] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:18.914 [2024-12-16 20:05:26.391890] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:18.914 [2024-12-16 20:05:26.391906] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:18.914 [2024-12-16 20:05:26.392227] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:18.914 00:12:18.914 [2024-12-16 20:05:26.392245] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:19.485 00:12:19.485 real 0m1.241s 00:12:19.485 user 0m0.978s 00:12:19.485 sys 0m0.152s 00:12:19.485 20:05:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:19.485 ************************************ 00:12:19.485 20:05:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.485 END TEST bdev_hello_world 00:12:19.486 ************************************ 00:12:19.486 20:05:27 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:19.486 20:05:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:19.486 20:05:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.486 20:05:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.486 ************************************ 00:12:19.486 START TEST bdev_bounds 00:12:19.486 ************************************ 00:12:19.486 20:05:27 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:19.486 20:05:27 -- bdev/blockdev.sh@288 -- # bdevio_pid=67716 00:12:19.486 20:05:27 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:19.486 Process bdevio pid: 67716 00:12:19.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.486 20:05:27 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67716' 00:12:19.486 20:05:27 -- bdev/blockdev.sh@291 -- # waitforlisten 67716 00:12:19.486 20:05:27 -- common/autotest_common.sh@829 -- # '[' -z 67716 ']' 00:12:19.486 20:05:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.486 20:05:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.486 20:05:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:19.486 20:05:27 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:19.486 20:05:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.486 20:05:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.486 [2024-12-16 20:05:27.098869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:19.486 [2024-12-16 20:05:27.098957] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67716 ] 00:12:19.746 [2024-12-16 20:05:27.238790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.746 [2024-12-16 20:05:27.384224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.746 [2024-12-16 20:05:27.384583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.746 [2024-12-16 20:05:27.384584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.318 20:05:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.318 20:05:27 -- common/autotest_common.sh@862 -- # return 0 00:12:20.318 20:05:27 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:20.579 I/O targets: 00:12:20.579 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:20.579 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:20.579 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:20.579 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:20.579 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:20.579 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:20.579 00:12:20.579 00:12:20.579 CUnit - A unit testing framework for C - Version 2.1-3 00:12:20.579 http://cunit.sourceforge.net/ 00:12:20.579 00:12:20.579 00:12:20.579 Suite: bdevio tests on: nvme3n1 00:12:20.579 Test: blockdev write read block ...passed 00:12:20.579 Test: blockdev write zeroes read block ...passed 00:12:20.579 Test: blockdev write zeroes read no split ...passed 00:12:20.579 Test: blockdev write zeroes read split ...passed 00:12:20.579 Test: blockdev write zeroes read split partial ...passed 00:12:20.579 Test: blockdev reset ...passed 00:12:20.579 Test: blockdev write read 8 blocks ...passed 00:12:20.579 Test: blockdev write read size > 128k ...passed 00:12:20.579 Test: blockdev write read invalid size ...passed 00:12:20.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.579 Test: blockdev write read max offset ...passed 00:12:20.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.579 Test: blockdev writev readv 8 blocks ...passed 00:12:20.579 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.579 Test: blockdev writev readv block ...passed 00:12:20.579 Test: blockdev writev readv size > 128k ...passed 00:12:20.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.579 Test: blockdev comparev and writev ...passed 00:12:20.579 Test: blockdev nvme passthru rw ...passed 00:12:20.579 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.579 Test: blockdev nvme admin passthru ...passed 00:12:20.579 Test: blockdev copy ...passed 00:12:20.579 Suite: bdevio tests on: nvme2n1 00:12:20.579 Test: blockdev write read 
block ...passed 00:12:20.579 Test: blockdev write zeroes read block ...passed 00:12:20.579 Test: blockdev write zeroes read no split ...passed 00:12:20.579 Test: blockdev write zeroes read split ...passed 00:12:20.579 Test: blockdev write zeroes read split partial ...passed 00:12:20.579 Test: blockdev reset ...passed 00:12:20.579 Test: blockdev write read 8 blocks ...passed 00:12:20.579 Test: blockdev write read size > 128k ...passed 00:12:20.579 Test: blockdev write read invalid size ...passed 00:12:20.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.579 Test: blockdev write read max offset ...passed 00:12:20.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.579 Test: blockdev writev readv 8 blocks ...passed 00:12:20.579 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.579 Test: blockdev writev readv block ...passed 00:12:20.579 Test: blockdev writev readv size > 128k ...passed 00:12:20.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.579 Test: blockdev comparev and writev ...passed 00:12:20.579 Test: blockdev nvme passthru rw ...passed 00:12:20.579 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.579 Test: blockdev nvme admin passthru ...passed 00:12:20.579 Test: blockdev copy ...passed 00:12:20.579 Suite: bdevio tests on: nvme1n3 00:12:20.579 Test: blockdev write read block ...passed 00:12:20.579 Test: blockdev write zeroes read block ...passed 00:12:20.579 Test: blockdev write zeroes read no split ...passed 00:12:20.579 Test: blockdev write zeroes read split ...passed 00:12:20.579 Test: blockdev write zeroes read split partial ...passed 00:12:20.579 Test: blockdev reset ...passed 00:12:20.579 Test: blockdev write read 8 blocks ...passed 00:12:20.579 Test: blockdev write read size > 128k ...passed 00:12:20.579 Test: blockdev write read invalid size ...passed 00:12:20.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.579 Test: blockdev write read max offset ...passed 00:12:20.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.579 Test: blockdev writev readv 8 blocks ...passed 00:12:20.579 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.580 Test: blockdev writev readv block ...passed 00:12:20.580 Test: blockdev writev readv size > 128k ...passed 00:12:20.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.580 Test: blockdev comparev and writev ...passed 00:12:20.580 Test: blockdev nvme passthru rw ...passed 00:12:20.580 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.580 Test: blockdev nvme admin passthru ...passed 00:12:20.580 Test: blockdev copy ...passed 00:12:20.580 Suite: bdevio tests on: nvme1n2 00:12:20.580 Test: blockdev write read block ...passed 00:12:20.580 Test: blockdev write zeroes read block ...passed 00:12:20.580 Test: blockdev write zeroes read no split ...passed 00:12:20.580 Test: blockdev write zeroes read split ...passed 00:12:20.580 Test: blockdev write zeroes read split partial ...passed 00:12:20.580 Test: blockdev reset ...passed 00:12:20.580 Test: blockdev write read 8 blocks ...passed 00:12:20.580 Test: blockdev write read size > 128k ...passed 00:12:20.580 Test: blockdev write read invalid size ...passed 00:12:20.580 Test: blockdev write read offset + nbytes 
== size of blockdev ...passed 00:12:20.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.580 Test: blockdev write read max offset ...passed 00:12:20.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.580 Test: blockdev writev readv 8 blocks ...passed 00:12:20.580 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.580 Test: blockdev writev readv block ...passed 00:12:20.580 Test: blockdev writev readv size > 128k ...passed 00:12:20.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.580 Test: blockdev comparev and writev ...passed 00:12:20.580 Test: blockdev nvme passthru rw ...passed 00:12:20.580 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.580 Test: blockdev nvme admin passthru ...passed 00:12:20.580 Test: blockdev copy ...passed 00:12:20.580 Suite: bdevio tests on: nvme1n1 00:12:20.580 Test: blockdev write read block ...passed 00:12:20.580 Test: blockdev write zeroes read block ...passed 00:12:20.580 Test: blockdev write zeroes read no split ...passed 00:12:20.580 Test: blockdev write zeroes read split ...passed 00:12:20.841 Test: blockdev write zeroes read split partial ...passed 00:12:20.841 Test: blockdev reset ...passed 00:12:20.841 Test: blockdev write read 8 blocks ...passed 00:12:20.841 Test: blockdev write read size > 128k ...passed 00:12:20.841 Test: blockdev write read invalid size ...passed 00:12:20.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.841 Test: blockdev write read max offset ...passed 00:12:20.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.841 Test: blockdev writev readv 8 blocks ...passed 00:12:20.841 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.841 Test: blockdev writev readv block ...passed 00:12:20.841 Test: blockdev writev readv size > 128k ...passed 00:12:20.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.841 Test: blockdev comparev and writev ...passed 00:12:20.841 Test: blockdev nvme passthru rw ...passed 00:12:20.841 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.841 Test: blockdev nvme admin passthru ...passed 00:12:20.841 Test: blockdev copy ...passed 00:12:20.841 Suite: bdevio tests on: nvme0n1 00:12:20.841 Test: blockdev write read block ...passed 00:12:20.841 Test: blockdev write zeroes read block ...passed 00:12:20.841 Test: blockdev write zeroes read no split ...passed 00:12:20.841 Test: blockdev write zeroes read split ...passed 00:12:20.841 Test: blockdev write zeroes read split partial ...passed 00:12:20.841 Test: blockdev reset ...passed 00:12:20.842 Test: blockdev write read 8 blocks ...passed 00:12:20.842 Test: blockdev write read size > 128k ...passed 00:12:20.842 Test: blockdev write read invalid size ...passed 00:12:20.842 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.842 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.842 Test: blockdev write read max offset ...passed 00:12:20.842 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.842 Test: blockdev writev readv 8 blocks ...passed 00:12:20.842 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.842 Test: blockdev writev readv block ...passed 00:12:20.842 Test: blockdev writev readv size > 128k ...passed 00:12:20.842 Test: blockdev writev readv size > 
128k in two iovs ...passed 00:12:20.842 Test: blockdev comparev and writev ...passed 00:12:20.842 Test: blockdev nvme passthru rw ...passed 00:12:20.842 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.842 Test: blockdev nvme admin passthru ...passed 00:12:20.842 Test: blockdev copy ...passed 00:12:20.842 00:12:20.842 Run Summary: Type Total Ran Passed Failed Inactive 00:12:20.842 suites 6 6 n/a 0 0 00:12:20.842 tests 138 138 138 0 0 00:12:20.842 asserts 780 780 780 0 n/a 00:12:20.842 00:12:20.842 Elapsed time = 0.882 seconds 00:12:20.842 0 00:12:20.842 20:05:28 -- bdev/blockdev.sh@293 -- # killprocess 67716 00:12:20.842 20:05:28 -- common/autotest_common.sh@936 -- # '[' -z 67716 ']' 00:12:20.842 20:05:28 -- common/autotest_common.sh@940 -- # kill -0 67716 00:12:20.842 20:05:28 -- common/autotest_common.sh@941 -- # uname 00:12:20.842 20:05:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:20.842 20:05:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67716 00:12:20.842 20:05:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:20.842 20:05:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:20.842 20:05:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67716' 00:12:20.842 killing process with pid 67716 00:12:20.842 20:05:28 -- common/autotest_common.sh@955 -- # kill 67716 00:12:20.842 20:05:28 -- common/autotest_common.sh@960 -- # wait 67716 00:12:21.414 20:05:28 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:21.414 00:12:21.414 real 0m1.903s 00:12:21.414 user 0m4.561s 00:12:21.414 sys 0m0.249s 00:12:21.414 20:05:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.414 20:05:28 -- common/autotest_common.sh@10 -- # set +x 00:12:21.414 ************************************ 00:12:21.414 END TEST bdev_bounds 00:12:21.414 ************************************ 00:12:21.414 20:05:28 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:21.414 20:05:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:21.414 20:05:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.414 20:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:21.414 ************************************ 00:12:21.414 START TEST bdev_nbd 00:12:21.414 ************************************ 00:12:21.414 20:05:29 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:21.414 20:05:29 -- bdev/blockdev.sh@298 -- # uname -s 00:12:21.414 20:05:29 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:21.414 20:05:29 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.414 20:05:29 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:21.414 20:05:29 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:21.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:12:21.414 20:05:29 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:21.414 20:05:29 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:21.414 20:05:29 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:21.414 20:05:29 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:21.414 20:05:29 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:21.414 20:05:29 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:21.414 20:05:29 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:21.414 20:05:29 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:21.414 20:05:29 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:21.414 20:05:29 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:21.414 20:05:29 -- bdev/blockdev.sh@316 -- # nbd_pid=67765 00:12:21.414 20:05:29 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:21.414 20:05:29 -- bdev/blockdev.sh@318 -- # waitforlisten 67765 /var/tmp/spdk-nbd.sock 00:12:21.414 20:05:29 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:21.414 20:05:29 -- common/autotest_common.sh@829 -- # '[' -z 67765 ']' 00:12:21.414 20:05:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:21.414 20:05:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.414 20:05:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:21.414 20:05:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.414 20:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:21.676 [2024-12-16 20:05:29.075139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:21.676 [2024-12-16 20:05:29.075462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.676 [2024-12-16 20:05:29.222002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.937 [2024-12-16 20:05:29.357465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.509 20:05:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.509 20:05:29 -- common/autotest_common.sh@862 -- # return 0 00:12:22.509 20:05:29 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@24 -- # local i 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.509 20:05:29 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:22.509 20:05:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:22.509 20:05:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:22.509 20:05:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:22.509 20:05:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:22.509 20:05:30 -- common/autotest_common.sh@867 -- # local i 00:12:22.509 20:05:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:22.509 20:05:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:22.509 20:05:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:22.509 20:05:30 -- common/autotest_common.sh@871 -- # break 00:12:22.509 20:05:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:22.509 20:05:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:22.509 20:05:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.509 1+0 records in 00:12:22.509 1+0 records out 00:12:22.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382019 s, 10.7 MB/s 00:12:22.509 20:05:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.509 20:05:30 -- common/autotest_common.sh@884 -- # size=4096 00:12:22.509 20:05:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.509 20:05:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:22.509 20:05:30 -- common/autotest_common.sh@887 -- # return 0 00:12:22.509 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.509 20:05:30 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.509 20:05:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:22.770 20:05:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:22.770 20:05:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:22.770 20:05:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:22.770 20:05:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:22.770 20:05:30 -- common/autotest_common.sh@867 -- # local i 00:12:22.770 20:05:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:22.770 20:05:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:22.770 20:05:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:22.770 20:05:30 -- common/autotest_common.sh@871 -- # break 00:12:22.770 20:05:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:22.770 20:05:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:22.770 20:05:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.770 1+0 records in 00:12:22.770 1+0 records out 00:12:22.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480765 s, 8.5 MB/s 00:12:22.770 20:05:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.770 20:05:30 -- common/autotest_common.sh@884 -- # size=4096 00:12:22.770 20:05:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.770 20:05:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:22.770 20:05:30 -- common/autotest_common.sh@887 -- # return 0 00:12:22.770 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.770 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.770 20:05:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:23.043 20:05:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:23.043 20:05:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:23.043 20:05:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:23.043 20:05:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:23.043 20:05:30 -- common/autotest_common.sh@867 -- # local i 00:12:23.043 20:05:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:23.043 20:05:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:23.043 20:05:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:23.043 20:05:30 -- common/autotest_common.sh@871 -- # break 00:12:23.043 20:05:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:23.043 20:05:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:23.043 20:05:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.043 1+0 records in 00:12:23.043 1+0 records out 00:12:23.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225431 s, 18.2 MB/s 00:12:23.043 20:05:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.043 20:05:30 -- common/autotest_common.sh@884 -- # size=4096 00:12:23.043 20:05:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.043 20:05:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:23.043 20:05:30 -- common/autotest_common.sh@887 -- # return 0 
00:12:23.043 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:23.043 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:23.043 20:05:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:23.372 20:05:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:23.372 20:05:30 -- common/autotest_common.sh@867 -- # local i 00:12:23.372 20:05:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:23.372 20:05:30 -- common/autotest_common.sh@871 -- # break 00:12:23.372 20:05:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.372 1+0 records in 00:12:23.372 1+0 records out 00:12:23.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376644 s, 10.9 MB/s 00:12:23.372 20:05:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.372 20:05:30 -- common/autotest_common.sh@884 -- # size=4096 00:12:23.372 20:05:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.372 20:05:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:23.372 20:05:30 -- common/autotest_common.sh@887 -- # return 0 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:23.372 20:05:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:23.372 20:05:30 -- common/autotest_common.sh@867 -- # local i 00:12:23.372 20:05:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:23.372 20:05:30 -- common/autotest_common.sh@871 -- # break 00:12:23.372 20:05:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:23.372 20:05:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.372 1+0 records in 00:12:23.372 1+0 records out 00:12:23.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038826 s, 10.5 MB/s 00:12:23.372 20:05:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.372 20:05:30 -- common/autotest_common.sh@884 -- # size=4096 00:12:23.372 20:05:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.372 20:05:30 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:12:23.372 20:05:30 -- common/autotest_common.sh@887 -- # return 0 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:23.372 20:05:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:23.633 20:05:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:23.633 20:05:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:23.633 20:05:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:23.633 20:05:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:23.633 20:05:31 -- common/autotest_common.sh@867 -- # local i 00:12:23.633 20:05:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:23.633 20:05:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:23.633 20:05:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:23.633 20:05:31 -- common/autotest_common.sh@871 -- # break 00:12:23.633 20:05:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:23.633 20:05:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:23.633 20:05:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.633 1+0 records in 00:12:23.633 1+0 records out 00:12:23.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056596 s, 7.2 MB/s 00:12:23.633 20:05:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.633 20:05:31 -- common/autotest_common.sh@884 -- # size=4096 00:12:23.633 20:05:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.633 20:05:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:23.633 20:05:31 -- common/autotest_common.sh@887 -- # return 0 00:12:23.633 20:05:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:23.633 20:05:31 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:23.633 20:05:31 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.894 20:05:31 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd0", 00:12:23.894 "bdev_name": "nvme0n1" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd1", 00:12:23.894 "bdev_name": "nvme1n1" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd2", 00:12:23.894 "bdev_name": "nvme1n2" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd3", 00:12:23.894 "bdev_name": "nvme1n3" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd4", 00:12:23.894 "bdev_name": "nvme2n1" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd5", 00:12:23.894 "bdev_name": "nvme3n1" 00:12:23.894 } 00:12:23.894 ]' 00:12:23.894 20:05:31 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:23.894 20:05:31 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:23.894 20:05:31 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd0", 00:12:23.894 "bdev_name": "nvme0n1" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd1", 00:12:23.894 "bdev_name": "nvme1n1" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd2", 00:12:23.894 "bdev_name": "nvme1n2" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd3", 00:12:23.894 
"bdev_name": "nvme1n3" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd4", 00:12:23.894 "bdev_name": "nvme2n1" 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "nbd_device": "/dev/nbd5", 00:12:23.894 "bdev_name": "nvme3n1" 00:12:23.894 } 00:12:23.894 ]' 00:12:23.894 20:05:31 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:23.894 20:05:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.895 20:05:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:23.895 20:05:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.895 20:05:31 -- bdev/nbd_common.sh@51 -- # local i 00:12:23.895 20:05:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.895 20:05:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@41 -- # break 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.156 20:05:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@41 -- # break 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@41 -- # break 00:12:24.417 20:05:31 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.417 20:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.417 20:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:24.678 
20:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@41 -- # break 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.678 20:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@41 -- # break 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@41 -- # break 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.939 20:05:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@65 -- # true 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@65 -- # count=0 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@122 -- # count=0 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:25.200 20:05:32 -- bdev/nbd_common.sh@127 -- # return 0 00:12:25.200 20:05:32 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@12 -- # local i 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.201 20:05:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:25.461 /dev/nbd0 00:12:25.461 20:05:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.461 20:05:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.461 20:05:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:25.461 20:05:32 -- common/autotest_common.sh@867 -- # local i 00:12:25.461 20:05:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.461 20:05:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.462 20:05:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:25.462 20:05:32 -- common/autotest_common.sh@871 -- # break 00:12:25.462 20:05:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.462 20:05:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.462 20:05:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.462 1+0 records in 00:12:25.462 1+0 records out 00:12:25.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028033 s, 14.6 MB/s 00:12:25.462 20:05:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.462 20:05:32 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.462 20:05:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.462 20:05:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.462 20:05:32 -- common/autotest_common.sh@887 -- # return 0 00:12:25.462 20:05:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.462 20:05:32 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.462 20:05:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:25.723 /dev/nbd1 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.723 20:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:25.723 20:05:33 -- common/autotest_common.sh@867 -- # local i 00:12:25.723 20:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:25.723 20:05:33 -- common/autotest_common.sh@871 -- # break 
00:12:25.723 20:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.723 1+0 records in 00:12:25.723 1+0 records out 00:12:25.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204641 s, 20.0 MB/s 00:12:25.723 20:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.723 20:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.723 20:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.723 20:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.723 20:05:33 -- common/autotest_common.sh@887 -- # return 0 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:25.723 /dev/nbd10 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:25.723 20:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:25.723 20:05:33 -- common/autotest_common.sh@867 -- # local i 00:12:25.723 20:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:25.723 20:05:33 -- common/autotest_common.sh@871 -- # break 00:12:25.723 20:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.723 20:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.723 1+0 records in 00:12:25.723 1+0 records out 00:12:25.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334801 s, 12.2 MB/s 00:12:25.723 20:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.723 20:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.723 20:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.723 20:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.723 20:05:33 -- common/autotest_common.sh@887 -- # return 0 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.723 20:05:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:25.985 /dev/nbd11 00:12:25.985 20:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:25.985 20:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:25.985 20:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:25.985 20:05:33 -- common/autotest_common.sh@867 -- # local i 00:12:25.985 20:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.985 20:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.985 20:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:25.985 20:05:33 -- 
common/autotest_common.sh@871 -- # break 00:12:25.985 20:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.985 20:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.985 20:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.985 1+0 records in 00:12:25.985 1+0 records out 00:12:25.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291068 s, 14.1 MB/s 00:12:25.985 20:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.985 20:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.985 20:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.985 20:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.985 20:05:33 -- common/autotest_common.sh@887 -- # return 0 00:12:25.985 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.985 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.985 20:05:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:26.246 /dev/nbd12 00:12:26.246 20:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:26.246 20:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:26.246 20:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:26.246 20:05:33 -- common/autotest_common.sh@867 -- # local i 00:12:26.246 20:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.246 20:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.246 20:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:26.246 20:05:33 -- common/autotest_common.sh@871 -- # break 00:12:26.246 20:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.246 20:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.246 20:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.246 1+0 records in 00:12:26.246 1+0 records out 00:12:26.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501904 s, 8.2 MB/s 00:12:26.246 20:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.246 20:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:12:26.246 20:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.246 20:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.246 20:05:33 -- common/autotest_common.sh@887 -- # return 0 00:12:26.246 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.246 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:26.246 20:05:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:26.507 /dev/nbd13 00:12:26.507 20:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:26.507 20:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:26.507 20:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:26.507 20:05:33 -- common/autotest_common.sh@867 -- # local i 00:12:26.507 20:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.507 20:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.507 20:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:12:26.507 20:05:33 -- common/autotest_common.sh@871 -- # break 00:12:26.507 20:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.507 20:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.507 20:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.507 1+0 records in 00:12:26.507 1+0 records out 00:12:26.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310261 s, 13.2 MB/s 00:12:26.507 20:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.508 20:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:12:26.508 20:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.508 20:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.508 20:05:33 -- common/autotest_common.sh@887 -- # return 0 00:12:26.508 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.508 20:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:26.508 20:05:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:26.508 20:05:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.508 20:05:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:26.769 20:05:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd0", 00:12:26.769 "bdev_name": "nvme0n1" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd1", 00:12:26.769 "bdev_name": "nvme1n1" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd10", 00:12:26.769 "bdev_name": "nvme1n2" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd11", 00:12:26.769 "bdev_name": "nvme1n3" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd12", 00:12:26.769 "bdev_name": "nvme2n1" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd13", 00:12:26.769 "bdev_name": "nvme3n1" 00:12:26.769 } 00:12:26.769 ]' 00:12:26.769 20:05:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd0", 00:12:26.769 "bdev_name": "nvme0n1" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd1", 00:12:26.769 "bdev_name": "nvme1n1" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd10", 00:12:26.769 "bdev_name": "nvme1n2" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd11", 00:12:26.769 "bdev_name": "nvme1n3" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd12", 00:12:26.769 "bdev_name": "nvme2n1" 00:12:26.769 }, 00:12:26.769 { 00:12:26.769 "nbd_device": "/dev/nbd13", 00:12:26.769 "bdev_name": "nvme3n1" 00:12:26.769 } 00:12:26.769 ]' 00:12:26.769 20:05:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:26.769 20:05:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:26.769 /dev/nbd1 00:12:26.769 /dev/nbd10 00:12:26.769 /dev/nbd11 00:12:26.769 /dev/nbd12 00:12:26.769 /dev/nbd13' 00:12:26.769 20:05:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:26.769 /dev/nbd1 00:12:26.769 /dev/nbd10 00:12:26.769 /dev/nbd11 00:12:26.769 /dev/nbd12 00:12:26.769 /dev/nbd13' 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@65 -- # count=6 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@95 -- # 
count=6 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:26.770 256+0 records in 00:12:26.770 256+0 records out 00:12:26.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00888493 s, 118 MB/s 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:26.770 256+0 records in 00:12:26.770 256+0 records out 00:12:26.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0570365 s, 18.4 MB/s 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:26.770 256+0 records in 00:12:26.770 256+0 records out 00:12:26.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0546784 s, 19.2 MB/s 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.770 20:05:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:27.030 256+0 records in 00:12:27.030 256+0 records out 00:12:27.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0535959 s, 19.6 MB/s 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:27.030 256+0 records in 00:12:27.030 256+0 records out 00:12:27.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0531996 s, 19.7 MB/s 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:27.030 256+0 records in 00:12:27.030 256+0 records out 00:12:27.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0685222 s, 15.3 MB/s 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:27.030 256+0 records in 00:12:27.030 256+0 records out 00:12:27.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0535867 s, 19.6 MB/s 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:27.030 
20:05:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@51 -- # local i 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.030 20:05:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:27.291 20:05:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@41 -- # break 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.292 20:05:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.555 20:05:35 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@41 -- # break 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.555 20:05:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@41 -- # break 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.817 20:05:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:28.077 20:05:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:28.077 20:05:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:28.077 20:05:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:28.077 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.078 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.078 20:05:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:28.078 20:05:35 -- bdev/nbd_common.sh@41 -- # break 00:12:28.078 20:05:35 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.078 20:05:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.078 20:05:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@41 -- # break 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@41 -- # break 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.339 20:05:35 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@65 -- # true 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@65 -- # count=0 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@104 -- # count=0 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@109 -- # return 0 00:12:28.600 20:05:36 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:28.600 20:05:36 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:28.862 malloc_lvol_verify 00:12:28.862 20:05:36 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:28.862 a739976e-7549-40e1-80c1-7ecbd8e17a24 00:12:28.862 20:05:36 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:29.122 e55cc999-9563-43ca-86a8-d2eb7300b18d 00:12:29.122 20:05:36 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:29.383 /dev/nbd0 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:29.383 mke2fs 1.47.0 (5-Feb-2023) 00:12:29.383 Discarding device blocks: 0/4096 done 00:12:29.383 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:29.383 00:12:29.383 Allocating group tables: 0/1 done 00:12:29.383 Writing inode tables: 0/1 done 00:12:29.383 Creating journal (1024 blocks): done 00:12:29.383 Writing superblocks and filesystem accounting information: 0/1 done 00:12:29.383 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@51 -- # local i 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.383 20:05:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@41 -- # break 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:29.644 20:05:37 -- bdev/nbd_common.sh@147 -- # return 0 00:12:29.644 20:05:37 -- bdev/blockdev.sh@324 -- # killprocess 67765 00:12:29.644 20:05:37 -- common/autotest_common.sh@936 -- # '[' -z 67765 ']' 00:12:29.644 20:05:37 -- common/autotest_common.sh@940 -- # kill -0 67765 00:12:29.644 20:05:37 -- common/autotest_common.sh@941 -- # uname 00:12:29.644 20:05:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.644 20:05:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67765 00:12:29.644 killing process with pid 67765 00:12:29.644 20:05:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.644 20:05:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.644 20:05:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67765' 00:12:29.644 20:05:37 -- common/autotest_common.sh@955 -- # kill 67765 00:12:29.644 20:05:37 -- common/autotest_common.sh@960 -- # wait 67765 00:12:30.588 20:05:37 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:30.588 00:12:30.588 real 0m8.890s 00:12:30.588 user 0m12.405s 00:12:30.588 sys 0m2.960s 00:12:30.588 20:05:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.588 ************************************ 00:12:30.588 END TEST bdev_nbd 00:12:30.588 ************************************ 00:12:30.588 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 20:05:37 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:30.588 20:05:37 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:30.588 20:05:37 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:30.588 20:05:37 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.588 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 ************************************ 00:12:30.588 START TEST bdev_fio 00:12:30.588 ************************************ 00:12:30.588 20:05:37 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:30.588 20:05:37 -- bdev/blockdev.sh@329 -- # local env_context 00:12:30.588 20:05:37 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:30.588 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:30.588 20:05:37 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:30.588 20:05:37 -- bdev/blockdev.sh@337 -- # echo '' 00:12:30.588 20:05:37 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:30.588 20:05:37 -- bdev/blockdev.sh@337 -- # env_context= 00:12:30.588 20:05:37 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.588 20:05:37 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:30.588 20:05:37 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:30.588 
20:05:37 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:30.588 20:05:37 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:30.588 20:05:37 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.588 20:05:37 -- common/autotest_common.sh@1290 -- # cat 00:12:30.588 20:05:37 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1303 -- # cat 00:12:30.588 20:05:37 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:30.588 20:05:37 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:30.588 20:05:38 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:30.588 20:05:38 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:30.588 20:05:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.588 20:05:38 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:30.588 20:05:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.588 20:05:38 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:30.588 20:05:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.588 20:05:38 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:30.588 20:05:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.588 20:05:38 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:30.588 20:05:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.588 20:05:38 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:30.588 20:05:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.588 20:05:38 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:30.588 20:05:38 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:30.588 20:05:38 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.588 20:05:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:30.588 20:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.588 20:05:38 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 ************************************ 00:12:30.588 START TEST bdev_fio_rw_verify 00:12:30.588 ************************************ 00:12:30.588 20:05:38 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.588 20:05:38 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.588 20:05:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:30.588 20:05:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:30.588 20:05:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:30.588 20:05:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.588 20:05:38 -- common/autotest_common.sh@1330 -- # shift 00:12:30.588 20:05:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:30.588 20:05:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:30.588 20:05:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.588 20:05:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:30.588 20:05:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:30.588 20:05:38 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:30.588 20:05:38 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:30.588 20:05:38 -- common/autotest_common.sh@1336 -- # break 00:12:30.588 20:05:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:30.588 20:05:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.588 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.588 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.588 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.588 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.588 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.588 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.588 fio-3.35 00:12:30.588 Starting 6 threads 00:12:42.825 00:12:42.825 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68151: Mon Dec 16 20:05:48 2024 00:12:42.825 read: IOPS=16.5k, BW=64.6MiB/s (67.8MB/s)(646MiB/10002msec) 00:12:42.825 slat (usec): min=2, max=2313, avg= 5.71, stdev=17.17 00:12:42.825 clat (usec): min=82, max=8412, avg=1178.42, stdev=855.21 00:12:42.825 lat (usec): min=85, max=8429, avg=1184.14, stdev=855.99 00:12:42.825 clat percentiles (usec): 00:12:42.825 | 50.000th=[ 947], 99.000th=[ 3884], 99.900th=[ 5342], 99.990th=[ 6652], 
00:12:42.825 | 99.999th=[ 8455] 00:12:42.825 write: IOPS=16.9k, BW=65.8MiB/s (69.0MB/s)(658MiB/10002msec); 0 zone resets 00:12:42.825 slat (usec): min=5, max=5091, avg=38.17, stdev=143.03 00:12:42.825 clat (usec): min=80, max=8120, avg=1389.88, stdev=959.10 00:12:42.825 lat (usec): min=94, max=8688, avg=1428.06, stdev=975.83 00:12:42.825 clat percentiles (usec): 00:12:42.825 | 50.000th=[ 1139], 99.000th=[ 4490], 99.900th=[ 6063], 99.990th=[ 7570], 00:12:42.825 | 99.999th=[ 8094] 00:12:42.825 bw ( KiB/s): min=43629, max=146697, per=100.00%, avg=68115.68, stdev=5143.67, samples=114 00:12:42.825 iops : min=10905, max=36673, avg=17027.47, stdev=1285.97, samples=114 00:12:42.825 lat (usec) : 100=0.02%, 250=4.34%, 500=14.95%, 750=16.88%, 1000=12.01% 00:12:42.825 lat (msec) : 2=32.35%, 4=18.15%, 10=1.29% 00:12:42.825 cpu : usr=44.34%, sys=31.67%, ctx=6548, majf=0, minf=17899 00:12:42.825 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:42.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.825 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.825 issued rwts: total=165499,168571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.825 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:42.825 00:12:42.825 Run status group 0 (all jobs): 00:12:42.825 READ: bw=64.6MiB/s (67.8MB/s), 64.6MiB/s-64.6MiB/s (67.8MB/s-67.8MB/s), io=646MiB (678MB), run=10002-10002msec 00:12:42.825 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=658MiB (690MB), run=10002-10002msec 00:12:42.825 ----------------------------------------------------- 00:12:42.825 Suppressions used: 00:12:42.825 count bytes template 00:12:42.825 6 48 /usr/src/fio/parse.c 00:12:42.825 2977 285792 /usr/src/fio/iolog.c 00:12:42.825 1 8 libtcmalloc_minimal.so 00:12:42.825 1 904 libcrypto.so 00:12:42.825 ----------------------------------------------------- 00:12:42.825 00:12:42.825 00:12:42.825 real 0m11.862s 00:12:42.825 user 0m28.107s 00:12:42.825 sys 0m19.356s 00:12:42.825 20:05:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.825 20:05:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.825 ************************************ 00:12:42.825 END TEST bdev_fio_rw_verify 00:12:42.825 ************************************ 00:12:42.825 20:05:49 -- bdev/blockdev.sh@348 -- # rm -f 00:12:42.825 20:05:49 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.825 20:05:49 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:42.825 20:05:49 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.825 20:05:49 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:42.825 20:05:49 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:42.825 20:05:49 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:42.825 20:05:49 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:42.825 20:05:49 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:42.825 20:05:49 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:42.825 20:05:49 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:42.825 20:05:49 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.825 20:05:49 -- common/autotest_common.sh@1290 -- # cat 00:12:42.825 
20:05:49 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:42.825 20:05:49 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:42.825 20:05:49 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:42.825 20:05:49 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:42.826 20:05:49 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8b6e77a4-1707-41e2-a0c0-2364fd1b1129"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8b6e77a4-1707-41e2-a0c0-2364fd1b1129",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d61e70b6-eeb1-4e18-a176-2d64ec31249b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d61e70b6-eeb1-4e18-a176-2d64ec31249b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "e213af7f-5fc4-4201-8ee5-13c15db247b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e213af7f-5fc4-4201-8ee5-13c15db247b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "62a59a6c-cc26-4364-8c68-f8a855c983b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "62a59a6c-cc26-4364-8c68-f8a855c983b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "316c982f-5da7-4edc-9b83-fba0bbac543d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "316c982f-5da7-4edc-9b83-fba0bbac543d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "93abceef-f43c-40c5-8e19-a10b2c644b0e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "93abceef-f43c-40c5-8e19-a10b2c644b0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:42.826 20:05:50 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:42.826 20:05:50 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.826 /home/vagrant/spdk_repo/spdk 00:12:42.826 20:05:50 -- bdev/blockdev.sh@360 -- # popd 00:12:42.826 20:05:50 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:42.826 20:05:50 -- bdev/blockdev.sh@362 -- # return 0 00:12:42.826 00:12:42.826 real 0m12.052s 00:12:42.826 user 0m28.184s 00:12:42.826 sys 0m19.438s 00:12:42.826 20:05:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.826 20:05:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.826 ************************************ 00:12:42.826 END TEST bdev_fio 00:12:42.826 ************************************ 00:12:42.826 20:05:50 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:42.826 20:05:50 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:42.826 20:05:50 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:42.826 20:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.826 20:05:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.826 ************************************ 00:12:42.826 START TEST bdev_verify 00:12:42.826 ************************************ 00:12:42.826 20:05:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:42.826 [2024-12-16 20:05:50.164187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:42.826 [2024-12-16 20:05:50.164349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68331 ] 00:12:42.826 [2024-12-16 20:05:50.319456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:43.087 [2024-12-16 20:05:50.536957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.087 [2024-12-16 20:05:50.537035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.349 Running I/O for 5 seconds... 
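For orientation, the verify stage that has just started is a single bdevperf invocation against the xnvme bdev configuration generated by the fio stage above; a minimal sketch using only the paths and flags visible in this log (the run_test wrapper and its argument checks are omitted):

# bdev_verify in essence: a 128-deep, 4 KiB random verify workload driven for
# 5 seconds on the two cores of mask 0x3 against the generated bdev.json.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" \
  --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3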
00:12:48.645 00:12:48.645 Latency(us) 00:12:48.645 [2024-12-16T20:05:56.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x0 length 0x20000 00:12:48.645 nvme0n1 : 5.08 2049.37 8.01 0.00 0.00 62201.58 15022.87 81466.29 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x20000 length 0x20000 00:12:48.645 nvme0n1 : 5.08 2007.57 7.84 0.00 0.00 63581.72 13611.32 81466.29 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x0 length 0x80000 00:12:48.645 nvme1n1 : 5.08 1998.32 7.81 0.00 0.00 63740.55 14216.27 81869.59 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x80000 length 0x80000 00:12:48.645 nvme1n1 : 5.09 1870.89 7.31 0.00 0.00 68071.28 10485.76 94371.84 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x0 length 0x80000 00:12:48.645 nvme1n2 : 5.07 1893.98 7.40 0.00 0.00 67243.99 18551.73 78239.90 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x80000 length 0x80000 00:12:48.645 nvme1n2 : 5.10 1914.11 7.48 0.00 0.00 66396.62 11796.48 84692.68 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x0 length 0x80000 00:12:48.645 nvme1n3 : 5.06 1942.44 7.59 0.00 0.00 65594.71 5066.44 82272.89 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x80000 length 0x80000 00:12:48.645 nvme1n3 : 5.09 1890.38 7.38 0.00 0.00 67206.43 7561.85 102841.11 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x0 length 0xbd0bd 00:12:48.645 nvme2n1 : 5.07 1966.26 7.68 0.00 0.00 64718.03 6604.01 82272.89 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:48.645 nvme2n1 : 5.10 1875.29 7.33 0.00 0.00 67591.73 8670.92 89935.56 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0x0 length 0xa0000 00:12:48.645 nvme3n1 : 5.08 1972.40 7.70 0.00 0.00 64378.97 7813.91 79046.50 00:12:48.645 [2024-12-16T20:05:56.285Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.645 Verification LBA range: start 0xa0000 length 0xa0000 00:12:48.645 nvme3n1 : 5.10 2030.11 7.93 0.00 0.00 62260.69 6276.33 79046.50 00:12:48.645 [2024-12-16T20:05:56.285Z] =================================================================================================================== 00:12:48.645 [2024-12-16T20:05:56.285Z] Total : 23411.12 91.45 0.00 0.00 65189.60 5066.44 102841.11 00:12:49.589 00:12:49.589 real 0m6.894s 00:12:49.589 user 0m8.624s 00:12:49.589 
sys 0m3.112s 00:12:49.589 ************************************ 00:12:49.589 END TEST bdev_verify 00:12:49.589 ************************************ 00:12:49.589 20:05:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.589 20:05:56 -- common/autotest_common.sh@10 -- # set +x 00:12:49.589 20:05:57 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.589 20:05:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:49.589 20:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.589 20:05:57 -- common/autotest_common.sh@10 -- # set +x 00:12:49.589 ************************************ 00:12:49.589 START TEST bdev_verify_big_io 00:12:49.589 ************************************ 00:12:49.589 20:05:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.589 [2024-12-16 20:05:57.119578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:49.589 [2024-12-16 20:05:57.119718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68438 ] 00:12:49.849 [2024-12-16 20:05:57.273234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.109 [2024-12-16 20:05:57.493644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.109 [2024-12-16 20:05:57.493734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.369 Running I/O for 5 seconds... 
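The big-I/O pass starting here reuses the same bdevperf verify workload and only raises the transfer size from 4 KiB to 64 KiB; the loop below is an illustrative way to express that relationship rather than the literal test script, with all flags taken from the log:

# One verify pass per I/O size: 4096 bytes for bdev_verify, 65536 bytes for
# bdev_verify_big_io; queue depth, runtime and core mask stay identical.
SPDK=/home/vagrant/spdk_repo/spdk
for io_size in 4096 65536; do
  "$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3
done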
00:12:56.962 00:12:56.962 Latency(us) 00:12:56.962 [2024-12-16T20:06:04.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x0 length 0x2000 00:12:56.962 nvme0n1 : 5.39 274.25 17.14 0.00 0.00 452191.44 107277.39 535580.36 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x2000 length 0x2000 00:12:56.962 nvme0n1 : 5.39 240.63 15.04 0.00 0.00 519353.27 116149.96 622692.82 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x0 length 0x8000 00:12:56.962 nvme1n1 : 5.39 213.64 13.35 0.00 0.00 573484.46 179871.11 551712.30 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x8000 length 0x8000 00:12:56.962 nvme1n1 : 5.47 254.42 15.90 0.00 0.00 484916.97 73400.32 587202.56 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x0 length 0x8000 00:12:56.962 nvme1n2 : 5.47 271.53 16.97 0.00 0.00 450363.17 51622.20 529127.58 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x8000 length 0x8000 00:12:56.962 nvme1n2 : 5.47 222.60 13.91 0.00 0.00 542324.02 45169.43 738842.78 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x0 length 0x8000 00:12:56.962 nvme1n3 : 5.47 286.61 17.91 0.00 0.00 416866.42 10687.41 464599.83 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x8000 length 0x8000 00:12:56.962 nvme1n3 : 5.50 301.48 18.84 0.00 0.00 395856.64 51622.20 564617.85 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x0 length 0xbd0b 00:12:56.962 nvme2n1 : 5.46 349.53 21.85 0.00 0.00 337996.12 57671.68 390392.91 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:56.962 nvme2n1 : 5.53 359.26 22.45 0.00 0.00 326960.01 38313.35 558165.07 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0x0 length 0xa000 00:12:56.962 nvme3n1 : 5.48 302.70 18.92 0.00 0.00 387520.91 4965.61 564617.85 00:12:56.962 [2024-12-16T20:06:04.602Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.962 Verification LBA range: start 0xa000 length 0xa000 00:12:56.962 nvme3n1 : 5.53 330.17 20.64 0.00 0.00 349730.86 1550.18 596881.72 00:12:56.962 [2024-12-16T20:06:04.602Z] =================================================================================================================== 00:12:56.962 [2024-12-16T20:06:04.602Z] Total : 3406.82 212.93 0.00 0.00 423712.28 1550.18 738842.78 00:12:57.222 ************************************ 
00:12:57.222 00:12:57.222 real 0m7.557s 00:12:57.222 user 0m13.300s 00:12:57.222 sys 0m0.690s 00:12:57.222 20:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.222 20:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:57.222 END TEST bdev_verify_big_io 00:12:57.222 ************************************ 00:12:57.222 20:06:04 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.222 20:06:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:57.222 20:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.222 20:06:04 -- common/autotest_common.sh@10 -- # set +x 00:12:57.222 ************************************ 00:12:57.222 START TEST bdev_write_zeroes 00:12:57.222 ************************************ 00:12:57.222 20:06:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.222 [2024-12-16 20:06:04.730625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:57.222 [2024-12-16 20:06:04.730727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68542 ] 00:12:57.522 [2024-12-16 20:06:04.880164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.522 [2024-12-16 20:06:05.052262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.780 Running I/O for 1 seconds... 
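Each phase in this log, including the write_zeroes pass that has just begun, runs through the same run_test harness that prints the START/END banners and the real/user/sys timings seen above; the helper below is a simplified, hypothetical rendering of that pattern (the real function in common/autotest_common.sh also handles xtrace toggling and argument validation):

# Hypothetical minimal run_test: banner, timed command, closing banner.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@" || return 1
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test_sketch bdev_write_zeroes \
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w write_zeroes -t 1 ''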
00:12:59.163 00:12:59.163 Latency(us) 00:12:59.163 [2024-12-16T20:06:06.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.163 [2024-12-16T20:06:06.803Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.163 nvme0n1 : 1.01 11785.68 46.04 0.00 0.00 10850.93 7057.72 19459.15 00:12:59.163 [2024-12-16T20:06:06.803Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.163 nvme1n1 : 1.01 11771.85 45.98 0.00 0.00 10855.37 7410.61 18652.55 00:12:59.163 [2024-12-16T20:06:06.803Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.163 nvme1n2 : 1.01 11758.38 45.93 0.00 0.00 10860.53 7713.08 17946.78 00:12:59.163 [2024-12-16T20:06:06.803Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.163 nvme1n3 : 1.01 11745.05 45.88 0.00 0.00 10863.78 7713.08 18551.73 00:12:59.163 [2024-12-16T20:06:06.803Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.163 nvme2n1 : 1.01 13595.50 53.11 0.00 0.00 9337.72 2886.10 17745.13 00:12:59.163 [2024-12-16T20:06:06.803Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.163 nvme3n1 : 1.02 11935.12 46.62 0.00 0.00 10612.43 2722.26 20366.57 00:12:59.163 [2024-12-16T20:06:06.803Z] =================================================================================================================== 00:12:59.163 [2024-12-16T20:06:06.803Z] Total : 72591.59 283.56 0.00 0.00 10532.17 2722.26 20366.57 00:12:59.735 00:12:59.735 real 0m2.607s 00:12:59.735 user 0m1.968s 00:12:59.735 sys 0m0.464s 00:12:59.735 20:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.735 ************************************ 00:12:59.735 END TEST bdev_write_zeroes 00:12:59.735 ************************************ 00:12:59.735 20:06:07 -- common/autotest_common.sh@10 -- # set +x 00:12:59.735 20:06:07 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.735 20:06:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:59.735 20:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.735 20:06:07 -- common/autotest_common.sh@10 -- # set +x 00:12:59.735 ************************************ 00:12:59.735 START TEST bdev_json_nonenclosed 00:12:59.735 ************************************ 00:12:59.735 20:06:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.997 [2024-12-16 20:06:07.420815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:59.997 [2024-12-16 20:06:07.421019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68596 ] 00:12:59.997 [2024-12-16 20:06:07.575527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.259 [2024-12-16 20:06:07.794810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.259 [2024-12-16 20:06:07.794986] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:00.259 [2024-12-16 20:06:07.795005] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:00.520 00:13:00.520 real 0m0.750s 00:13:00.520 user 0m0.512s 00:13:00.520 sys 0m0.129s 00:13:00.520 ************************************ 00:13:00.520 END TEST bdev_json_nonenclosed 00:13:00.520 ************************************ 00:13:00.520 20:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.520 20:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:00.520 20:06:08 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.520 20:06:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:00.520 20:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.520 20:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:00.781 ************************************ 00:13:00.781 START TEST bdev_json_nonarray 00:13:00.781 ************************************ 00:13:00.781 20:06:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.781 [2024-12-16 20:06:08.235338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.781 [2024-12-16 20:06:08.235489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68623 ] 00:13:00.781 [2024-12-16 20:06:08.390978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.042 [2024-12-16 20:06:08.618231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.042 [2024-12-16 20:06:08.618446] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
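The two JSON tests around this point are negative tests: bdevperf is pointed at deliberately malformed configurations (nonenclosed.json, whose top level is not wrapped in {}, and nonarray.json, whose "subsystems" entry is not an array) and must reject them cleanly instead of crashing. The sketch below captures that idea; treating a zero exit status as the failure condition is an assumption about how the harness detects success:

# Negative-test sketch: feed bdevperf intentionally broken configs and
# require a clean, non-zero exit.
SPDK=/home/vagrant/spdk_repo/spdk
for bad_json in nonenclosed.json nonarray.json; do
  if "$SPDK/build/examples/bdevperf" \
       --json "$SPDK/test/bdev/$bad_json" \
       -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo "bdevperf unexpectedly accepted $bad_json" >&2
    exit 1
  fi
done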
00:13:01.042 [2024-12-16 20:06:08.618468] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.303 00:13:01.303 real 0m0.741s 00:13:01.303 user 0m0.519s 00:13:01.303 sys 0m0.114s 00:13:01.303 20:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.303 20:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:01.303 ************************************ 00:13:01.303 END TEST bdev_json_nonarray 00:13:01.303 ************************************ 00:13:01.563 20:06:08 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:01.563 20:06:08 -- bdev/blockdev.sh@809 -- # cleanup 00:13:01.563 20:06:08 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:01.563 20:06:08 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:01.563 20:06:08 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:01.563 20:06:08 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:02.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:04.423 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.365 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.365 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.365 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.365 00:13:05.365 real 0m55.207s 00:13:05.365 user 1m19.925s 00:13:05.365 sys 0m32.743s 00:13:05.365 ************************************ 00:13:05.365 END TEST blockdev_xnvme 00:13:05.365 ************************************ 00:13:05.365 20:06:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:05.365 20:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 20:06:12 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:05.365 20:06:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:05.365 20:06:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.365 20:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 ************************************ 00:13:05.365 START TEST ublk 00:13:05.365 ************************************ 00:13:05.365 20:06:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:05.626 * Looking for test storage... 
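The block that follows is the lcov version guard that common.sh dumps into the trace when a test script is sourced: it splits two dotted version strings on '.' and '-' and compares them field by field to decide which coverage flags the installed lcov supports. A condensed, hypothetical sketch of that comparison:

# Hypothetical sketch of the cmp_versions idea: succeed when ver1 < ver2.
version_lt_sketch() {
  local IFS=.-
  local -a v1=($1) v2=($2)
  local i
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}

# e.g. the check traced below: is the installed lcov older than 2?
version_lt_sketch "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2"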
00:13:05.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:05.626 20:06:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:05.626 20:06:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:05.626 20:06:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:05.626 20:06:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:05.626 20:06:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:05.626 20:06:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:05.626 20:06:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:05.626 20:06:13 -- scripts/common.sh@335 -- # IFS=.-: 00:13:05.626 20:06:13 -- scripts/common.sh@335 -- # read -ra ver1 00:13:05.626 20:06:13 -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.626 20:06:13 -- scripts/common.sh@336 -- # read -ra ver2 00:13:05.626 20:06:13 -- scripts/common.sh@337 -- # local 'op=<' 00:13:05.626 20:06:13 -- scripts/common.sh@339 -- # ver1_l=2 00:13:05.626 20:06:13 -- scripts/common.sh@340 -- # ver2_l=1 00:13:05.626 20:06:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:05.626 20:06:13 -- scripts/common.sh@343 -- # case "$op" in 00:13:05.626 20:06:13 -- scripts/common.sh@344 -- # : 1 00:13:05.626 20:06:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:05.626 20:06:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:05.626 20:06:13 -- scripts/common.sh@364 -- # decimal 1 00:13:05.626 20:06:13 -- scripts/common.sh@352 -- # local d=1 00:13:05.626 20:06:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.626 20:06:13 -- scripts/common.sh@354 -- # echo 1 00:13:05.626 20:06:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:05.626 20:06:13 -- scripts/common.sh@365 -- # decimal 2 00:13:05.626 20:06:13 -- scripts/common.sh@352 -- # local d=2 00:13:05.626 20:06:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.626 20:06:13 -- scripts/common.sh@354 -- # echo 2 00:13:05.626 20:06:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:05.626 20:06:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:05.626 20:06:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:05.626 20:06:13 -- scripts/common.sh@367 -- # return 0 00:13:05.627 20:06:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.627 20:06:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:05.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.627 --rc genhtml_branch_coverage=1 00:13:05.627 --rc genhtml_function_coverage=1 00:13:05.627 --rc genhtml_legend=1 00:13:05.627 --rc geninfo_all_blocks=1 00:13:05.627 --rc geninfo_unexecuted_blocks=1 00:13:05.627 00:13:05.627 ' 00:13:05.627 20:06:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:05.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.627 --rc genhtml_branch_coverage=1 00:13:05.627 --rc genhtml_function_coverage=1 00:13:05.627 --rc genhtml_legend=1 00:13:05.627 --rc geninfo_all_blocks=1 00:13:05.627 --rc geninfo_unexecuted_blocks=1 00:13:05.627 00:13:05.627 ' 00:13:05.627 20:06:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:05.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.627 --rc genhtml_branch_coverage=1 00:13:05.627 --rc genhtml_function_coverage=1 00:13:05.627 --rc genhtml_legend=1 00:13:05.627 --rc geninfo_all_blocks=1 00:13:05.627 --rc geninfo_unexecuted_blocks=1 00:13:05.627 00:13:05.627 ' 00:13:05.627 20:06:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:05.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.627 --rc genhtml_branch_coverage=1 00:13:05.627 --rc genhtml_function_coverage=1 00:13:05.627 --rc genhtml_legend=1 00:13:05.627 --rc geninfo_all_blocks=1 00:13:05.627 --rc geninfo_unexecuted_blocks=1 00:13:05.627 00:13:05.627 ' 00:13:05.627 20:06:13 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:05.627 20:06:13 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:05.627 20:06:13 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:05.627 20:06:13 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:05.627 20:06:13 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:05.627 20:06:13 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:05.627 20:06:13 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:05.627 20:06:13 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:05.627 20:06:13 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:05.627 20:06:13 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:05.627 20:06:13 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:05.627 20:06:13 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:05.627 20:06:13 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:05.627 20:06:13 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:05.627 20:06:13 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:05.627 20:06:13 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:05.627 20:06:13 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:05.627 20:06:13 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:05.627 20:06:13 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:05.627 20:06:13 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:05.627 20:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:05.627 20:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.627 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:05.627 ************************************ 00:13:05.627 START TEST test_save_ublk_config 00:13:05.627 ************************************ 00:13:05.627 20:06:13 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:05.627 20:06:13 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:05.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.627 20:06:13 -- ublk/ublk.sh@103 -- # tgtpid=68932 00:13:05.627 20:06:13 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:05.627 20:06:13 -- ublk/ublk.sh@106 -- # waitforlisten 68932 00:13:05.627 20:06:13 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:05.627 20:06:13 -- common/autotest_common.sh@829 -- # '[' -z 68932 ']' 00:13:05.627 20:06:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.627 20:06:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.627 20:06:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.627 20:06:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.627 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:05.627 [2024-12-16 20:06:13.211061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
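With ublk_drv loaded and the target about to listen on /var/tmp/spdk.sock, the save-config scenario that unfolds below reduces to a short rpc.py sequence. The method names and parameter values match the trace and the saved JSON further down; the exact CLI flag spellings and positional argument order are assumptions, so treat this as a sketch rather than the literal script:

# Sketched test_save_ublk_config flow: ublk target pinned to CPU 0, a 32 MiB
# malloc bdev (8192 blocks x 4096 B), one ublk disk with 1 queue of depth 128,
# then dump the live configuration as JSON.
rpc_sketch() {
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"
}
rpc_sketch ublk_create_target                  # saved config shows cpumask "1"
rpc_sketch bdev_malloc_create -b malloc0 32 4096
rpc_sketch ublk_start_disk malloc0 0           # num_queues 1, queue_depth 128
rpc_sketch save_config > ublk_saved_config.json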
00:13:05.627 [2024-12-16 20:06:13.211206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68932 ] 00:13:05.888 [2024-12-16 20:06:13.364546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.149 [2024-12-16 20:06:13.606031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.149 [2024-12-16 20:06:13.606281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.536 20:06:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.536 20:06:14 -- common/autotest_common.sh@862 -- # return 0 00:13:07.536 20:06:14 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:07.536 20:06:14 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:07.536 20:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.536 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:07.536 [2024-12-16 20:06:14.752169] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:07.536 malloc0 00:13:07.536 [2024-12-16 20:06:14.823478] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:07.536 [2024-12-16 20:06:14.823580] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:07.536 [2024-12-16 20:06:14.823589] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:07.536 [2024-12-16 20:06:14.823599] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:07.536 [2024-12-16 20:06:14.831376] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:07.536 [2024-12-16 20:06:14.831414] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:07.536 [2024-12-16 20:06:14.839353] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:07.536 [2024-12-16 20:06:14.839485] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:07.536 [2024-12-16 20:06:14.856348] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:07.536 0 00:13:07.536 20:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.536 20:06:14 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:07.536 20:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.536 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:07.536 20:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.536 20:06:15 -- ublk/ublk.sh@115 -- # config='{ 00:13:07.536 "subsystems": [ 00:13:07.536 { 00:13:07.536 "subsystem": "iobuf", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "iobuf_set_options", 00:13:07.536 "params": { 00:13:07.536 "small_pool_count": 8192, 00:13:07.536 "large_pool_count": 1024, 00:13:07.536 "small_bufsize": 8192, 00:13:07.536 "large_bufsize": 135168 00:13:07.536 } 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "sock", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "sock_impl_set_options", 00:13:07.536 "params": { 00:13:07.536 "impl_name": "posix", 00:13:07.536 "recv_buf_size": 2097152, 00:13:07.536 "send_buf_size": 2097152, 00:13:07.536 "enable_recv_pipe": true, 00:13:07.536 "enable_quickack": false, 00:13:07.536 "enable_placement_id": 0, 00:13:07.536 
"enable_zerocopy_send_server": true, 00:13:07.536 "enable_zerocopy_send_client": false, 00:13:07.536 "zerocopy_threshold": 0, 00:13:07.536 "tls_version": 0, 00:13:07.536 "enable_ktls": false 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "sock_impl_set_options", 00:13:07.536 "params": { 00:13:07.536 "impl_name": "ssl", 00:13:07.536 "recv_buf_size": 4096, 00:13:07.536 "send_buf_size": 4096, 00:13:07.536 "enable_recv_pipe": true, 00:13:07.536 "enable_quickack": false, 00:13:07.536 "enable_placement_id": 0, 00:13:07.536 "enable_zerocopy_send_server": true, 00:13:07.536 "enable_zerocopy_send_client": false, 00:13:07.536 "zerocopy_threshold": 0, 00:13:07.536 "tls_version": 0, 00:13:07.536 "enable_ktls": false 00:13:07.536 } 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "vmd", 00:13:07.536 "config": [] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "accel", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "accel_set_options", 00:13:07.536 "params": { 00:13:07.536 "small_cache_size": 128, 00:13:07.536 "large_cache_size": 16, 00:13:07.536 "task_count": 2048, 00:13:07.536 "sequence_count": 2048, 00:13:07.536 "buf_count": 2048 00:13:07.536 } 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "bdev", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "bdev_set_options", 00:13:07.536 "params": { 00:13:07.536 "bdev_io_pool_size": 65535, 00:13:07.536 "bdev_io_cache_size": 256, 00:13:07.536 "bdev_auto_examine": true, 00:13:07.536 "iobuf_small_cache_size": 128, 00:13:07.536 "iobuf_large_cache_size": 16 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "bdev_raid_set_options", 00:13:07.536 "params": { 00:13:07.536 "process_window_size_kb": 1024 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "bdev_iscsi_set_options", 00:13:07.536 "params": { 00:13:07.536 "timeout_sec": 30 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "bdev_nvme_set_options", 00:13:07.536 "params": { 00:13:07.536 "action_on_timeout": "none", 00:13:07.536 "timeout_us": 0, 00:13:07.536 "timeout_admin_us": 0, 00:13:07.536 "keep_alive_timeout_ms": 10000, 00:13:07.536 "transport_retry_count": 4, 00:13:07.536 "arbitration_burst": 0, 00:13:07.536 "low_priority_weight": 0, 00:13:07.536 "medium_priority_weight": 0, 00:13:07.536 "high_priority_weight": 0, 00:13:07.536 "nvme_adminq_poll_period_us": 10000, 00:13:07.536 "nvme_ioq_poll_period_us": 0, 00:13:07.536 "io_queue_requests": 0, 00:13:07.536 "delay_cmd_submit": true, 00:13:07.536 "bdev_retry_count": 3, 00:13:07.536 "transport_ack_timeout": 0, 00:13:07.536 "ctrlr_loss_timeout_sec": 0, 00:13:07.536 "reconnect_delay_sec": 0, 00:13:07.536 "fast_io_fail_timeout_sec": 0, 00:13:07.536 "generate_uuids": false, 00:13:07.536 "transport_tos": 0, 00:13:07.536 "io_path_stat": false, 00:13:07.536 "allow_accel_sequence": false 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "bdev_nvme_set_hotplug", 00:13:07.536 "params": { 00:13:07.536 "period_us": 100000, 00:13:07.536 "enable": false 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "bdev_malloc_create", 00:13:07.536 "params": { 00:13:07.536 "name": "malloc0", 00:13:07.536 "num_blocks": 8192, 00:13:07.536 "block_size": 4096, 00:13:07.536 "physical_block_size": 4096, 00:13:07.536 "uuid": "46e10422-05cb-4e37-8b99-ee79ab930bc5", 00:13:07.536 "optimal_io_boundary": 0 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 
"method": "bdev_wait_for_examine" 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "scsi", 00:13:07.536 "config": null 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "scheduler", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "framework_set_scheduler", 00:13:07.536 "params": { 00:13:07.536 "name": "static" 00:13:07.536 } 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "vhost_scsi", 00:13:07.536 "config": [] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "vhost_blk", 00:13:07.536 "config": [] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "ublk", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "ublk_create_target", 00:13:07.536 "params": { 00:13:07.536 "cpumask": "1" 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "ublk_start_disk", 00:13:07.536 "params": { 00:13:07.536 "bdev_name": "malloc0", 00:13:07.536 "ublk_id": 0, 00:13:07.536 "num_queues": 1, 00:13:07.536 "queue_depth": 128 00:13:07.536 } 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "nbd", 00:13:07.536 "config": [] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "nvmf", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "nvmf_set_config", 00:13:07.536 "params": { 00:13:07.536 "discovery_filter": "match_any", 00:13:07.536 "admin_cmd_passthru": { 00:13:07.536 "identify_ctrlr": false 00:13:07.536 } 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "nvmf_set_max_subsystems", 00:13:07.536 "params": { 00:13:07.536 "max_subsystems": 1024 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "method": "nvmf_set_crdt", 00:13:07.536 "params": { 00:13:07.536 "crdt1": 0, 00:13:07.536 "crdt2": 0, 00:13:07.536 "crdt3": 0 00:13:07.536 } 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "subsystem": "iscsi", 00:13:07.536 "config": [ 00:13:07.536 { 00:13:07.536 "method": "iscsi_set_options", 00:13:07.536 "params": { 00:13:07.536 "node_base": "iqn.2016-06.io.spdk", 00:13:07.536 "max_sessions": 128, 00:13:07.536 "max_connections_per_session": 2, 00:13:07.536 "max_queue_depth": 64, 00:13:07.536 "default_time2wait": 2, 00:13:07.536 "default_time2retain": 20, 00:13:07.536 "first_burst_length": 8192, 00:13:07.537 "immediate_data": true, 00:13:07.537 "allow_duplicated_isid": false, 00:13:07.537 "error_recovery_level": 0, 00:13:07.537 "nop_timeout": 60, 00:13:07.537 "nop_in_interval": 30, 00:13:07.537 "disable_chap": false, 00:13:07.537 "require_chap": false, 00:13:07.537 "mutual_chap": false, 00:13:07.537 "chap_group": 0, 00:13:07.537 "max_large_datain_per_connection": 64, 00:13:07.537 "max_r2t_per_connection": 4, 00:13:07.537 "pdu_pool_size": 36864, 00:13:07.537 "immediate_data_pool_size": 16384, 00:13:07.537 "data_out_pool_size": 2048 00:13:07.537 } 00:13:07.537 } 00:13:07.537 ] 00:13:07.537 } 00:13:07.537 ] 00:13:07.537 }' 00:13:07.537 20:06:15 -- ublk/ublk.sh@116 -- # killprocess 68932 00:13:07.537 20:06:15 -- common/autotest_common.sh@936 -- # '[' -z 68932 ']' 00:13:07.537 20:06:15 -- common/autotest_common.sh@940 -- # kill -0 68932 00:13:07.537 20:06:15 -- common/autotest_common.sh@941 -- # uname 00:13:07.537 20:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:07.537 20:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68932 00:13:07.537 20:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:07.537 20:06:15 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:07.537 20:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68932' 00:13:07.537 killing process with pid 68932 00:13:07.537 20:06:15 -- common/autotest_common.sh@955 -- # kill 68932 00:13:07.537 20:06:15 -- common/autotest_common.sh@960 -- # wait 68932 00:13:08.922 [2024-12-16 20:06:16.264707] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:08.922 [2024-12-16 20:06:16.300355] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:08.922 [2024-12-16 20:06:16.300511] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:08.922 [2024-12-16 20:06:16.308337] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:08.922 [2024-12-16 20:06:16.308403] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:08.922 [2024-12-16 20:06:16.308419] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:08.922 [2024-12-16 20:06:16.308451] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:08.922 [2024-12-16 20:06:16.308606] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:10.306 20:06:17 -- ublk/ublk.sh@119 -- # tgtpid=68994 00:13:10.306 20:06:17 -- ublk/ublk.sh@121 -- # waitforlisten 68994 00:13:10.306 20:06:17 -- common/autotest_common.sh@829 -- # '[' -z 68994 ']' 00:13:10.306 20:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.306 20:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.306 20:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
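The restart that follows feeds the saved JSON back into a fresh spdk_tgt through /dev/fd/63. Condensed into a standalone sketch, with the binary and rpc.py paths taken from this log and the shell variables (rpc, tgt, old_pid, new_pid) being illustrative shorthand rather than names used by ublk.sh, the save/restore pattern exercised here looks roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    config=$("$rpc" save_config)        # dump the running ublk/bdev config as JSON
    kill "$old_pid"; wait "$old_pid"    # stop the first target (pid 68932 above)

    # Restart the target and replay the saved config via process substitution,
    # which is what the trace shows as "-c /dev/fd/63".
    "$tgt" -L ublk -c <(echo "$config") &
    new_pid=$!
    "$rpc" ublk_get_disks               # /dev/ublkb0 should reappear with the same settings
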
00:13:10.306 20:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.306 20:06:17 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:10.306 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:13:10.306 20:06:17 -- ublk/ublk.sh@118 -- # echo '{ 00:13:10.306 "subsystems": [ 00:13:10.306 { 00:13:10.306 "subsystem": "iobuf", 00:13:10.306 "config": [ 00:13:10.306 { 00:13:10.306 "method": "iobuf_set_options", 00:13:10.306 "params": { 00:13:10.306 "small_pool_count": 8192, 00:13:10.306 "large_pool_count": 1024, 00:13:10.306 "small_bufsize": 8192, 00:13:10.306 "large_bufsize": 135168 00:13:10.306 } 00:13:10.306 } 00:13:10.306 ] 00:13:10.306 }, 00:13:10.306 { 00:13:10.306 "subsystem": "sock", 00:13:10.306 "config": [ 00:13:10.306 { 00:13:10.306 "method": "sock_impl_set_options", 00:13:10.306 "params": { 00:13:10.306 "impl_name": "posix", 00:13:10.306 "recv_buf_size": 2097152, 00:13:10.306 "send_buf_size": 2097152, 00:13:10.306 "enable_recv_pipe": true, 00:13:10.306 "enable_quickack": false, 00:13:10.306 "enable_placement_id": 0, 00:13:10.306 "enable_zerocopy_send_server": true, 00:13:10.306 "enable_zerocopy_send_client": false, 00:13:10.306 "zerocopy_threshold": 0, 00:13:10.306 "tls_version": 0, 00:13:10.306 "enable_ktls": false 00:13:10.306 } 00:13:10.306 }, 00:13:10.306 { 00:13:10.306 "method": "sock_impl_set_options", 00:13:10.306 "params": { 00:13:10.306 "impl_name": "ssl", 00:13:10.306 "recv_buf_size": 4096, 00:13:10.306 "send_buf_size": 4096, 00:13:10.306 "enable_recv_pipe": true, 00:13:10.306 "enable_quickack": false, 00:13:10.306 "enable_placement_id": 0, 00:13:10.306 "enable_zerocopy_send_server": true, 00:13:10.306 "enable_zerocopy_send_client": false, 00:13:10.306 "zerocopy_threshold": 0, 00:13:10.306 "tls_version": 0, 00:13:10.306 "enable_ktls": false 00:13:10.306 } 00:13:10.306 } 00:13:10.306 ] 00:13:10.306 }, 00:13:10.306 { 00:13:10.306 "subsystem": "vmd", 00:13:10.306 "config": [] 00:13:10.306 }, 00:13:10.306 { 00:13:10.306 "subsystem": "accel", 00:13:10.306 "config": [ 00:13:10.306 { 00:13:10.307 "method": "accel_set_options", 00:13:10.307 "params": { 00:13:10.307 "small_cache_size": 128, 00:13:10.307 "large_cache_size": 16, 00:13:10.307 "task_count": 2048, 00:13:10.307 "sequence_count": 2048, 00:13:10.307 "buf_count": 2048 00:13:10.307 } 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "bdev", 00:13:10.307 "config": [ 00:13:10.307 { 00:13:10.307 "method": "bdev_set_options", 00:13:10.307 "params": { 00:13:10.307 "bdev_io_pool_size": 65535, 00:13:10.307 "bdev_io_cache_size": 256, 00:13:10.307 "bdev_auto_examine": true, 00:13:10.307 "iobuf_small_cache_size": 128, 00:13:10.307 "iobuf_large_cache_size": 16 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "bdev_raid_set_options", 00:13:10.307 "params": { 00:13:10.307 "process_window_size_kb": 1024 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "bdev_iscsi_set_options", 00:13:10.307 "params": { 00:13:10.307 "timeout_sec": 30 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "bdev_nvme_set_options", 00:13:10.307 "params": { 00:13:10.307 "action_on_timeout": "none", 00:13:10.307 "timeout_us": 0, 00:13:10.307 "timeout_admin_us": 0, 00:13:10.307 "keep_alive_timeout_ms": 10000, 00:13:10.307 "transport_retry_count": 4, 00:13:10.307 "arbitration_burst": 0, 00:13:10.307 "low_priority_weight": 0, 00:13:10.307 "medium_priority_weight": 0, 00:13:10.307 "high_priority_weight": 0, 
00:13:10.307 "nvme_adminq_poll_period_us": 10000, 00:13:10.307 "nvme_ioq_poll_period_us": 0, 00:13:10.307 "io_queue_requests": 0, 00:13:10.307 "delay_cmd_submit": true, 00:13:10.307 "bdev_retry_count": 3, 00:13:10.307 "transport_ack_timeout": 0, 00:13:10.307 "ctrlr_loss_timeout_sec": 0, 00:13:10.307 "reconnect_delay_sec": 0, 00:13:10.307 "fast_io_fail_timeout_sec": 0, 00:13:10.307 "generate_uuids": false, 00:13:10.307 "transport_tos": 0, 00:13:10.307 "io_path_stat": false, 00:13:10.307 "allow_accel_sequence": false 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "bdev_nvme_set_hotplug", 00:13:10.307 "params": { 00:13:10.307 "period_us": 100000, 00:13:10.307 "enable": false 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "bdev_malloc_create", 00:13:10.307 "params": { 00:13:10.307 "name": "malloc0", 00:13:10.307 "num_blocks": 8192, 00:13:10.307 "block_size": 4096, 00:13:10.307 "physical_block_size": 4096, 00:13:10.307 "uuid": "46e10422-05cb-4e37-8b99-ee79ab930bc5", 00:13:10.307 "optimal_io_boundary": 0 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "bdev_wait_for_examine" 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "scsi", 00:13:10.307 "config": null 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "scheduler", 00:13:10.307 "config": [ 00:13:10.307 { 00:13:10.307 "method": "framework_set_scheduler", 00:13:10.307 "params": { 00:13:10.307 "name": "static" 00:13:10.307 } 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "vhost_scsi", 00:13:10.307 "config": [] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "vhost_blk", 00:13:10.307 "config": [] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "ublk", 00:13:10.307 "config": [ 00:13:10.307 { 00:13:10.307 "method": "ublk_create_target", 00:13:10.307 "params": { 00:13:10.307 "cpumask": "1" 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "ublk_start_disk", 00:13:10.307 "params": { 00:13:10.307 "bdev_name": "malloc0", 00:13:10.307 "ublk_id": 0, 00:13:10.307 "num_queues": 1, 00:13:10.307 "queue_depth": 128 00:13:10.307 } 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "nbd", 00:13:10.307 "config": [] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "nvmf", 00:13:10.307 "config": [ 00:13:10.307 { 00:13:10.307 "method": "nvmf_set_config", 00:13:10.307 "params": { 00:13:10.307 "discovery_filter": "match_any", 00:13:10.307 "admin_cmd_passthru": { 00:13:10.307 "identify_ctrlr": false 00:13:10.307 } 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "nvmf_set_max_subsystems", 00:13:10.307 "params": { 00:13:10.307 "max_subsystems": 1024 00:13:10.307 } 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "method": "nvmf_set_crdt", 00:13:10.307 "params": { 00:13:10.307 "crdt1": 0, 00:13:10.307 "crdt2": 0, 00:13:10.307 "crdt3": 0 00:13:10.307 } 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 }, 00:13:10.307 { 00:13:10.307 "subsystem": "iscsi", 00:13:10.307 "config": [ 00:13:10.307 { 00:13:10.307 "method": "iscsi_set_options", 00:13:10.307 "params": { 00:13:10.307 "node_base": "iqn.2016-06.io.spdk", 00:13:10.307 "max_sessions": 128, 00:13:10.307 "max_connections_per_session": 2, 00:13:10.307 "max_queue_depth": 64, 00:13:10.307 "default_time2wait": 2, 00:13:10.307 "default_time2retain": 20, 00:13:10.307 "first_burst_length": 8192, 00:13:10.307 "immediate_data": true, 00:13:10.307 "allow_duplicated_isid": false, 00:13:10.307 
"error_recovery_level": 0, 00:13:10.307 "nop_timeout": 60, 00:13:10.307 "nop_in_interval": 30, 00:13:10.307 "disable_chap": false, 00:13:10.307 "require_chap": false, 00:13:10.307 "mutual_chap": false, 00:13:10.307 "chap_group": 0, 00:13:10.307 "max_large_datain_per_connection": 64, 00:13:10.307 "max_r2t_per_connection": 4, 00:13:10.307 "pdu_pool_size": 36864, 00:13:10.307 "immediate_data_pool_size": 16384, 00:13:10.307 "data_out_pool_size": 2048 00:13:10.307 } 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 } 00:13:10.307 ] 00:13:10.307 }' 00:13:10.307 [2024-12-16 20:06:17.807976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:10.307 [2024-12-16 20:06:17.808092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68994 ] 00:13:10.568 [2024-12-16 20:06:17.953803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.568 [2024-12-16 20:06:18.110742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.568 [2024-12-16 20:06:18.110902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.141 [2024-12-16 20:06:18.697904] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:11.141 [2024-12-16 20:06:18.705405] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:11.141 [2024-12-16 20:06:18.705465] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:11.141 [2024-12-16 20:06:18.705471] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:11.141 [2024-12-16 20:06:18.705476] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:11.141 [2024-12-16 20:06:18.714365] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:11.141 [2024-12-16 20:06:18.714382] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:11.141 [2024-12-16 20:06:18.721318] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:11.141 [2024-12-16 20:06:18.721393] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:11.141 [2024-12-16 20:06:18.738315] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:11.712 20:06:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.712 20:06:19 -- common/autotest_common.sh@862 -- # return 0 00:13:11.712 20:06:19 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:11.712 20:06:19 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:11.712 20:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.712 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:11.712 20:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.712 20:06:19 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:11.712 20:06:19 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:11.712 20:06:19 -- ublk/ublk.sh@125 -- # killprocess 68994 00:13:11.712 20:06:19 -- common/autotest_common.sh@936 -- # '[' -z 68994 ']' 00:13:11.712 20:06:19 -- common/autotest_common.sh@940 -- # kill -0 68994 00:13:11.712 20:06:19 -- common/autotest_common.sh@941 -- # uname 00:13:11.712 20:06:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:13:11.712 20:06:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68994 00:13:11.973 20:06:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:11.973 killing process with pid 68994 00:13:11.973 20:06:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:11.973 20:06:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68994' 00:13:11.973 20:06:19 -- common/autotest_common.sh@955 -- # kill 68994 00:13:11.973 20:06:19 -- common/autotest_common.sh@960 -- # wait 68994 00:13:12.544 [2024-12-16 20:06:20.092507] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:12.544 [2024-12-16 20:06:20.121377] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:12.544 [2024-12-16 20:06:20.121479] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:12.544 [2024-12-16 20:06:20.130336] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:12.544 [2024-12-16 20:06:20.130377] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:12.544 [2024-12-16 20:06:20.130383] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:12.544 [2024-12-16 20:06:20.130401] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:12.544 [2024-12-16 20:06:20.130517] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:13.931 20:06:21 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:13.931 ************************************ 00:13:13.931 END TEST test_save_ublk_config 00:13:13.931 ************************************ 00:13:13.931 00:13:13.931 real 0m8.297s 00:13:13.931 user 0m6.039s 00:13:13.931 sys 0m3.213s 00:13:13.931 20:06:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.931 20:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:13.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.931 20:06:21 -- ublk/ublk.sh@139 -- # spdk_pid=69069 00:13:13.931 20:06:21 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:13.931 20:06:21 -- ublk/ublk.sh@141 -- # waitforlisten 69069 00:13:13.931 20:06:21 -- common/autotest_common.sh@829 -- # '[' -z 69069 ']' 00:13:13.931 20:06:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.931 20:06:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.931 20:06:21 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:13.931 20:06:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.931 20:06:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.931 20:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:13.931 [2024-12-16 20:06:21.522458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:13.931 [2024-12-16 20:06:21.522537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69069 ] 00:13:14.191 [2024-12-16 20:06:21.663883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:14.191 [2024-12-16 20:06:21.805020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.191 [2024-12-16 20:06:21.805441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.191 [2024-12-16 20:06:21.805595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.763 20:06:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.763 20:06:22 -- common/autotest_common.sh@862 -- # return 0 00:13:14.763 20:06:22 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:14.763 20:06:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:14.763 20:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.763 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:14.763 ************************************ 00:13:14.763 START TEST test_create_ublk 00:13:14.763 ************************************ 00:13:14.763 20:06:22 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:14.763 20:06:22 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:14.763 20:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.763 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:14.763 [2024-12-16 20:06:22.355812] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:14.763 20:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.763 20:06:22 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:14.763 20:06:22 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:14.763 20:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.763 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:15.024 20:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.024 20:06:22 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:15.024 20:06:22 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:15.024 20:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.024 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:15.024 [2024-12-16 20:06:22.514416] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:15.024 [2024-12-16 20:06:22.514713] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:15.024 [2024-12-16 20:06:22.514725] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:15.024 [2024-12-16 20:06:22.514732] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.024 [2024-12-16 20:06:22.523483] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.024 [2024-12-16 20:06:22.523504] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.024 [2024-12-16 20:06:22.530323] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.024 [2024-12-16 20:06:22.536471] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:15.024 [2024-12-16 20:06:22.559334] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:15.024 20:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.024 20:06:22 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:15.024 20:06:22 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:15.024 20:06:22 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:15.024 20:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.024 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:15.024 20:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.024 20:06:22 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:15.024 { 00:13:15.024 "ublk_device": "/dev/ublkb0", 00:13:15.024 "id": 0, 00:13:15.024 "queue_depth": 512, 00:13:15.024 "num_queues": 4, 00:13:15.024 "bdev_name": "Malloc0" 00:13:15.024 } 00:13:15.024 ]' 00:13:15.024 20:06:22 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:15.024 20:06:22 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:15.024 20:06:22 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:15.024 20:06:22 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:15.024 20:06:22 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:15.285 20:06:22 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:15.285 20:06:22 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:15.285 20:06:22 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:15.285 20:06:22 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:15.285 20:06:22 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:15.285 20:06:22 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:15.285 20:06:22 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:15.285 20:06:22 -- lvol/common.sh@41 -- # local offset=0 00:13:15.285 20:06:22 -- lvol/common.sh@42 -- # local size=134217728 00:13:15.285 20:06:22 -- lvol/common.sh@43 -- # local rw=write 00:13:15.285 20:06:22 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:15.285 20:06:22 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:15.285 20:06:22 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:15.285 20:06:22 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:15.285 20:06:22 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:15.285 20:06:22 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:15.285 20:06:22 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:15.285 fio: verification read phase will never start because write phase uses all of runtime 00:13:15.285 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:15.285 fio-3.35 00:13:15.285 Starting 1 process 00:13:27.511 00:13:27.511 fio_test: (groupid=0, jobs=1): err= 0: pid=69113: Mon Dec 16 20:06:32 2024 00:13:27.511 write: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(494MiB/10001msec); 0 zone resets 00:13:27.511 clat (usec): min=39, max=8832, avg=78.36, stdev=205.70 00:13:27.511 lat (usec): min=39, max=8848, avg=78.78, stdev=205.72 00:13:27.511 clat percentiles (usec): 00:13:27.511 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 52], 20.00th=[ 58], 00:13:27.511 | 
30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 69], 00:13:27.511 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 84], 00:13:27.511 | 99.00th=[ 99], 99.50th=[ 208], 99.90th=[ 3621], 99.95th=[ 3785], 00:13:27.511 | 99.99th=[ 4047] 00:13:27.511 bw ( KiB/s): min=33464, max=59992, per=99.20%, avg=50156.47, stdev=9179.88, samples=19 00:13:27.511 iops : min= 8366, max=14998, avg=12539.11, stdev=2294.97, samples=19 00:13:27.511 lat (usec) : 50=7.08%, 100=91.98%, 250=0.49%, 500=0.04%, 750=0.01% 00:13:27.511 lat (usec) : 1000=0.02% 00:13:27.511 lat (msec) : 2=0.04%, 4=0.33%, 10=0.02% 00:13:27.511 cpu : usr=1.81%, sys=11.19%, ctx=126458, majf=0, minf=796 00:13:27.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.511 issued rwts: total=0,126410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.511 00:13:27.511 Run status group 0 (all jobs): 00:13:27.511 WRITE: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=494MiB (518MB), run=10001-10001msec 00:13:27.511 00:13:27.511 Disk stats (read/write): 00:13:27.511 ublkb0: ios=0/124923, merge=0/0, ticks=0/8533, in_queue=8533, util=99.09% 00:13:27.511 20:06:32 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:27.511 20:06:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.511 20:06:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.511 [2024-12-16 20:06:32.985186] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:27.511 [2024-12-16 20:06:33.020349] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:27.511 [2024-12-16 20:06:33.021053] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:27.511 [2024-12-16 20:06:33.031366] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:27.511 [2024-12-16 20:06:33.031626] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:27.511 [2024-12-16 20:06:33.031636] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:27.511 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.511 20:06:33 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:27.511 20:06:33 -- common/autotest_common.sh@650 -- # local es=0 00:13:27.512 20:06:33 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:27.512 20:06:33 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:27.512 20:06:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.512 20:06:33 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:27.512 20:06:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.512 20:06:33 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:33.046411] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:27.512 request: 00:13:27.512 { 00:13:27.512 "ublk_id": 0, 00:13:27.512 "method": "ublk_stop_disk", 00:13:27.512 "req_id": 1 00:13:27.512 } 00:13:27.512 Got JSON-RPC error response 00:13:27.512 response: 00:13:27.512 { 00:13:27.512 "code": -19, 
00:13:27.512 "message": "No such device" 00:13:27.512 } 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:27.512 20:06:33 -- common/autotest_common.sh@653 -- # es=1 00:13:27.512 20:06:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.512 20:06:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.512 20:06:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.512 20:06:33 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:33.059375] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:27.512 [2024-12-16 20:06:33.063396] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:27.512 [2024-12-16 20:06:33.063425] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:27.512 20:06:33 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:27.512 20:06:33 -- lvol/common.sh@26 -- # jq length 00:13:27.512 20:06:33 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:27.512 20:06:33 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:27.512 20:06:33 -- lvol/common.sh@28 -- # jq length 00:13:27.512 ************************************ 00:13:27.512 END TEST test_create_ublk 00:13:27.512 ************************************ 00:13:27.512 20:06:33 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:27.512 00:13:27.512 real 0m11.186s 00:13:27.512 user 0m0.476s 00:13:27.512 sys 0m1.192s 00:13:27.512 20:06:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:33 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:27.512 20:06:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:27.512 20:06:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 ************************************ 00:13:27.512 START TEST test_create_multi_ublk 00:13:27.512 ************************************ 00:13:27.512 20:06:33 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:27.512 20:06:33 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:33.582939] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:27.512 20:06:33 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:27.512 20:06:33 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.512 20:06:33 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:27.512 20:06:33 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:33.821441] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:27.512 [2024-12-16 20:06:33.821770] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:27.512 [2024-12-16 20:06:33.821782] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:27.512 [2024-12-16 20:06:33.821790] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.512 [2024-12-16 20:06:33.841327] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.512 [2024-12-16 20:06:33.841353] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.512 [2024-12-16 20:06:33.853321] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.512 [2024-12-16 20:06:33.853858] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:27.512 [2024-12-16 20:06:33.886324] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.512 20:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:33 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:27.512 20:06:33 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.512 20:06:33 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:27.512 20:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:34 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:27.512 20:06:34 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:27.512 20:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:34.118414] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:27.512 [2024-12-16 20:06:34.118735] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:27.512 [2024-12-16 20:06:34.118748] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:27.512 [2024-12-16 20:06:34.118754] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.512 [2024-12-16 20:06:34.126341] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.512 [2024-12-16 20:06:34.126363] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
00:13:27.512 [2024-12-16 20:06:34.134332] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.512 [2024-12-16 20:06:34.134867] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:27.512 [2024-12-16 20:06:34.143328] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.512 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:34 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:27.512 20:06:34 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.512 20:06:34 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:27.512 20:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:34 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:27.512 20:06:34 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:27.512 20:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:34.326444] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:27.512 [2024-12-16 20:06:34.326761] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:27.512 [2024-12-16 20:06:34.326768] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:27.512 [2024-12-16 20:06:34.326777] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.512 [2024-12-16 20:06:34.334340] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.512 [2024-12-16 20:06:34.334362] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.512 [2024-12-16 20:06:34.342326] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.512 [2024-12-16 20:06:34.342859] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:27.512 [2024-12-16 20:06:34.363330] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.512 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:34 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:27.512 20:06:34 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.512 20:06:34 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:27.512 20:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.512 20:06:34 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:27.512 20:06:34 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:27.512 20:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.512 20:06:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.512 [2024-12-16 20:06:34.538430] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:27.512 [2024-12-16 20:06:34.538750] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:27.512 [2024-12-16 20:06:34.538763] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:27.512 [2024-12-16 20:06:34.538768] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.512 [2024-12-16 20:06:34.542619] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.512 [2024-12-16 20:06:34.542636] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.512 [2024-12-16 20:06:34.553326] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.512 [2024-12-16 20:06:34.553849] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:27.513 [2024-12-16 20:06:34.570333] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.513 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:27.513 20:06:34 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:27.513 20:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.513 20:06:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.513 20:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:27.513 { 00:13:27.513 "ublk_device": "/dev/ublkb0", 00:13:27.513 "id": 0, 00:13:27.513 "queue_depth": 512, 00:13:27.513 "num_queues": 4, 00:13:27.513 "bdev_name": "Malloc0" 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "ublk_device": "/dev/ublkb1", 00:13:27.513 "id": 1, 00:13:27.513 "queue_depth": 512, 00:13:27.513 "num_queues": 4, 00:13:27.513 "bdev_name": "Malloc1" 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "ublk_device": "/dev/ublkb2", 00:13:27.513 "id": 2, 00:13:27.513 "queue_depth": 512, 00:13:27.513 "num_queues": 4, 00:13:27.513 "bdev_name": "Malloc2" 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "ublk_device": "/dev/ublkb3", 00:13:27.513 "id": 3, 00:13:27.513 "queue_depth": 512, 00:13:27.513 "num_queues": 4, 00:13:27.513 "bdev_name": "Malloc3" 00:13:27.513 } 00:13:27.513 ]' 00:13:27.513 20:06:34 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:27.513 20:06:34 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.513 20:06:34 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:27.513 20:06:34 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:27.513 20:06:34 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:27.513 20:06:34 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:27.513 20:06:34 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:27.513 20:06:34 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.513 20:06:34 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:27.513 20:06:34 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:27.513 20:06:34 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:27.513 20:06:34 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:27.513 20:06:34 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:27.513 20:06:34 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.513 20:06:34 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:27.513 20:06:34 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:27.513 20:06:34 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:27.513 20:06:34 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:27.513 20:06:35 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.513 20:06:35 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:27.513 20:06:35 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.513 20:06:35 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:27.513 20:06:35 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:27.513 20:06:35 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.513 20:06:35 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:27.513 20:06:35 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:27.513 20:06:35 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:27.513 20:06:35 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:27.513 20:06:35 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:27.771 20:06:35 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:27.771 20:06:35 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:27.771 20:06:35 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:27.771 20:06:35 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.771 20:06:35 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:27.771 20:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.771 20:06:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.771 [2024-12-16 20:06:35.240399] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:27.771 [2024-12-16 20:06:35.272903] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:27.771 [2024-12-16 20:06:35.273988] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:27.771 [2024-12-16 20:06:35.288334] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:27.771 [2024-12-16 20:06:35.288586] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:27.771 [2024-12-16 20:06:35.288600] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:27.771 20:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.771 20:06:35 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:27.771 20:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.771 20:06:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.771 [2024-12-16 20:06:35.304394] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:27.771 [2024-12-16 20:06:35.337906] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:27.771 [2024-12-16 20:06:35.338958] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:27.771 [2024-12-16 20:06:35.348326] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:27.771 [2024-12-16 20:06:35.348575] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:27.771 [2024-12-16 20:06:35.348589] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:27.771 20:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.771 20:06:35 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:27.771 20:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.771 20:06:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.771 [2024-12-16 20:06:35.364374] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:27.771 [2024-12-16 20:06:35.390905] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:27.771 [2024-12-16 20:06:35.391911] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:27.771 [2024-12-16 20:06:35.396336] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:27.771 [2024-12-16 20:06:35.396573] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:27.771 [2024-12-16 20:06:35.396589] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:27.771 20:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.771 20:06:35 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.771 20:06:35 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:27.771 20:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.771 20:06:35 -- common/autotest_common.sh@10 -- # set +x 00:13:28.071 [2024-12-16 20:06:35.412395] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.071 [2024-12-16 20:06:35.456356] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.071 [2024-12-16 20:06:35.456995] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.071 [2024-12-16 20:06:35.464332] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.071 [2024-12-16 20:06:35.464568] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:28.071 [2024-12-16 20:06:35.464581] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:28.071 20:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.071 20:06:35 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:28.071 [2024-12-16 20:06:35.648403] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:28.071 [2024-12-16 20:06:35.656317] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:28.071 [2024-12-16 20:06:35.656344] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:28.071 20:06:35 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:28.071 20:06:35 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.071 20:06:35 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:28.071 20:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.071 20:06:35 -- common/autotest_common.sh@10 -- # set +x 00:13:28.673 20:06:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.673 20:06:36 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.673 20:06:36 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:28.673 20:06:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.673 20:06:36 -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 20:06:36 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.932 20:06:36 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.932 20:06:36 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:28.932 20:06:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.932 20:06:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.191 20:06:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.191 20:06:36 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.191 20:06:36 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:29.191 20:06:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.191 20:06:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.449 20:06:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.449 20:06:36 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:29.449 20:06:36 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:29.449 20:06:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.449 20:06:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.449 20:06:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.449 20:06:36 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:29.449 20:06:36 -- lvol/common.sh@26 -- # jq length 00:13:29.449 20:06:36 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:29.449 20:06:36 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:29.449 20:06:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.449 20:06:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.449 20:06:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.449 20:06:36 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:29.449 20:06:36 -- lvol/common.sh@28 -- # jq length 00:13:29.449 ************************************ 00:13:29.449 END TEST test_create_multi_ublk 00:13:29.449 ************************************ 00:13:29.449 20:06:36 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:29.449 00:13:29.449 real 0m3.389s 00:13:29.449 user 0m0.810s 00:13:29.449 sys 0m0.140s 00:13:29.449 20:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.449 20:06:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.449 20:06:36 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:29.449 20:06:36 -- ublk/ublk.sh@147 -- # cleanup 00:13:29.449 20:06:36 -- ublk/ublk.sh@130 -- # killprocess 69069 00:13:29.449 20:06:36 -- common/autotest_common.sh@936 -- # '[' -z 69069 ']' 00:13:29.449 20:06:36 -- common/autotest_common.sh@940 -- # kill -0 69069 00:13:29.449 20:06:36 -- common/autotest_common.sh@941 -- # uname 00:13:29.449 20:06:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:29.449 20:06:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69069 00:13:29.449 killing process with pid 69069 00:13:29.449 20:06:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:29.449 20:06:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:29.449 20:06:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69069' 00:13:29.449 20:06:36 -- common/autotest_common.sh@955 -- # kill 69069 00:13:29.449 20:06:36 -- common/autotest_common.sh@960 -- # wait 69069 00:13:30.017 [2024-12-16 20:06:37.573141] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:30.017 [2024-12-16 20:06:37.573196] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:30.954 00:13:30.954 real 0m25.328s 00:13:30.954 user 0m35.112s 00:13:30.954 sys 0m10.103s 00:13:30.954 20:06:38 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:13:30.954 ************************************ 00:13:30.954 END TEST ublk 00:13:30.954 ************************************ 00:13:30.954 20:06:38 -- common/autotest_common.sh@10 -- # set +x 00:13:30.954 20:06:38 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:30.954 20:06:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:30.954 20:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.954 20:06:38 -- common/autotest_common.sh@10 -- # set +x 00:13:30.954 ************************************ 00:13:30.954 START TEST ublk_recovery 00:13:30.954 ************************************ 00:13:30.954 20:06:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:30.954 * Looking for test storage... 00:13:30.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:30.954 20:06:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:30.954 20:06:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:30.954 20:06:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:30.954 20:06:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:30.954 20:06:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:30.954 20:06:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:30.954 20:06:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:30.954 20:06:38 -- scripts/common.sh@335 -- # IFS=.-: 00:13:30.954 20:06:38 -- scripts/common.sh@335 -- # read -ra ver1 00:13:30.954 20:06:38 -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.954 20:06:38 -- scripts/common.sh@336 -- # read -ra ver2 00:13:30.954 20:06:38 -- scripts/common.sh@337 -- # local 'op=<' 00:13:30.954 20:06:38 -- scripts/common.sh@339 -- # ver1_l=2 00:13:30.954 20:06:38 -- scripts/common.sh@340 -- # ver2_l=1 00:13:30.954 20:06:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:30.954 20:06:38 -- scripts/common.sh@343 -- # case "$op" in 00:13:30.954 20:06:38 -- scripts/common.sh@344 -- # : 1 00:13:30.954 20:06:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:30.954 20:06:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.954 20:06:38 -- scripts/common.sh@364 -- # decimal 1 00:13:30.954 20:06:38 -- scripts/common.sh@352 -- # local d=1 00:13:30.954 20:06:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.954 20:06:38 -- scripts/common.sh@354 -- # echo 1 00:13:30.954 20:06:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:30.954 20:06:38 -- scripts/common.sh@365 -- # decimal 2 00:13:30.954 20:06:38 -- scripts/common.sh@352 -- # local d=2 00:13:30.954 20:06:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.954 20:06:38 -- scripts/common.sh@354 -- # echo 2 00:13:30.954 20:06:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:30.954 20:06:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:30.954 20:06:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:30.954 20:06:38 -- scripts/common.sh@367 -- # return 0 00:13:30.954 20:06:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.954 20:06:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:30.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.955 --rc genhtml_branch_coverage=1 00:13:30.955 --rc genhtml_function_coverage=1 00:13:30.955 --rc genhtml_legend=1 00:13:30.955 --rc geninfo_all_blocks=1 00:13:30.955 --rc geninfo_unexecuted_blocks=1 00:13:30.955 00:13:30.955 ' 00:13:30.955 20:06:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:30.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.955 --rc genhtml_branch_coverage=1 00:13:30.955 --rc genhtml_function_coverage=1 00:13:30.955 --rc genhtml_legend=1 00:13:30.955 --rc geninfo_all_blocks=1 00:13:30.955 --rc geninfo_unexecuted_blocks=1 00:13:30.955 00:13:30.955 ' 00:13:30.955 20:06:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:30.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.955 --rc genhtml_branch_coverage=1 00:13:30.955 --rc genhtml_function_coverage=1 00:13:30.955 --rc genhtml_legend=1 00:13:30.955 --rc geninfo_all_blocks=1 00:13:30.955 --rc geninfo_unexecuted_blocks=1 00:13:30.955 00:13:30.955 ' 00:13:30.955 20:06:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:30.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.955 --rc genhtml_branch_coverage=1 00:13:30.955 --rc genhtml_function_coverage=1 00:13:30.955 --rc genhtml_legend=1 00:13:30.955 --rc geninfo_all_blocks=1 00:13:30.955 --rc geninfo_unexecuted_blocks=1 00:13:30.955 00:13:30.955 ' 00:13:30.955 20:06:38 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:30.955 20:06:38 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:30.955 20:06:38 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:30.955 20:06:38 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:30.955 20:06:38 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:30.955 20:06:38 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:30.955 20:06:38 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:30.955 20:06:38 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:30.955 20:06:38 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:30.955 20:06:38 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:30.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:30.955 20:06:38 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69464 00:13:30.955 20:06:38 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.955 20:06:38 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69464 00:13:30.955 20:06:38 -- common/autotest_common.sh@829 -- # '[' -z 69464 ']' 00:13:30.955 20:06:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.955 20:06:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.955 20:06:38 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:30.955 20:06:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.955 20:06:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.955 20:06:38 -- common/autotest_common.sh@10 -- # set +x 00:13:30.955 [2024-12-16 20:06:38.519457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:30.955 [2024-12-16 20:06:38.519574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69464 ] 00:13:31.214 [2024-12-16 20:06:38.667604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.473 [2024-12-16 20:06:38.856139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.473 [2024-12-16 20:06:38.856520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.473 [2024-12-16 20:06:38.856530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.408 20:06:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.408 20:06:40 -- common/autotest_common.sh@862 -- # return 0 00:13:32.408 20:06:40 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:32.408 20:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.408 20:06:40 -- common/autotest_common.sh@10 -- # set +x 00:13:32.408 [2024-12-16 20:06:40.020093] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:32.408 20:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.408 20:06:40 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:32.408 20:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.408 20:06:40 -- common/autotest_common.sh@10 -- # set +x 00:13:32.666 malloc0 00:13:32.666 20:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.666 20:06:40 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:32.666 20:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.666 20:06:40 -- common/autotest_common.sh@10 -- # set +x 00:13:32.666 [2024-12-16 20:06:40.114446] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:32.666 [2024-12-16 20:06:40.114546] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:32.666 [2024-12-16 20:06:40.114552] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:32.666 [2024-12-16 20:06:40.114560] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.666 [2024-12-16 20:06:40.123441] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.666 [2024-12-16 
20:06:40.123472] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.666 [2024-12-16 20:06:40.130323] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:32.666 [2024-12-16 20:06:40.130452] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:32.666 [2024-12-16 20:06:40.146336] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:32.666 1 00:13:32.666 20:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.666 20:06:40 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:33.601 20:06:41 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69506 00:13:33.601 20:06:41 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:33.601 20:06:41 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:33.860 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.860 fio-3.35 00:13:33.860 Starting 1 process 00:13:39.128 20:06:46 -- ublk/ublk_recovery.sh@36 -- # kill -9 69464 00:13:39.128 20:06:46 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:44.409 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69464 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:44.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.409 20:06:51 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69617 00:13:44.409 20:06:51 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:44.409 20:06:51 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69617 00:13:44.409 20:06:51 -- common/autotest_common.sh@829 -- # '[' -z 69617 ']' 00:13:44.409 20:06:51 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:44.409 20:06:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.409 20:06:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.409 20:06:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.409 20:06:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.409 20:06:51 -- common/autotest_common.sh@10 -- # set +x 00:13:44.409 [2024-12-16 20:06:51.236086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:44.409 [2024-12-16 20:06:51.236707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69617 ] 00:13:44.409 [2024-12-16 20:06:51.384336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.409 [2024-12-16 20:06:51.578375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.409 [2024-12-16 20:06:51.578917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.409 [2024-12-16 20:06:51.579009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.346 20:06:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.346 20:06:52 -- common/autotest_common.sh@862 -- # return 0 00:13:45.346 20:06:52 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:45.346 20:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.346 20:06:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.346 [2024-12-16 20:06:52.704838] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:45.346 20:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.346 20:06:52 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:45.346 20:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.346 20:06:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.346 malloc0 00:13:45.346 20:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.346 20:06:52 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:45.346 20:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.346 20:06:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.346 [2024-12-16 20:06:52.792430] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:45.346 [2024-12-16 20:06:52.792469] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:45.346 [2024-12-16 20:06:52.792476] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:45.346 [2024-12-16 20:06:52.800361] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:45.346 [2024-12-16 20:06:52.800380] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:45.346 [2024-12-16 20:06:52.800443] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:45.346 1 00:13:45.346 20:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.346 20:06:52 -- ublk/ublk_recovery.sh@52 -- # wait 69506 00:14:11.889 [2024-12-16 20:07:16.480326] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:11.889 [2024-12-16 20:07:16.487118] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:11.889 [2024-12-16 20:07:16.494526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:11.889 [2024-12-16 20:07:16.494552] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:33.880 00:14:33.880 fio_test: (groupid=0, jobs=1): err= 0: pid=69509: Mon Dec 16 20:07:41 2024 00:14:33.880 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(3204MiB/60002msec) 00:14:33.880 slat (nsec): min=1240, max=342013, 
avg=5531.26, stdev=1627.87 00:14:33.880 clat (usec): min=830, max=30345k, avg=4521.24, stdev=261646.06 00:14:33.880 lat (usec): min=837, max=30345k, avg=4526.77, stdev=261646.05 00:14:33.880 clat percentiles (usec): 00:14:33.880 | 1.00th=[ 1876], 5.00th=[ 2024], 10.00th=[ 2057], 20.00th=[ 2089], 00:14:33.880 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2147], 00:14:33.880 | 70.00th=[ 2180], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3228], 00:14:33.880 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 7439], 99.95th=[ 8029], 00:14:33.880 | 99.99th=[13042] 00:14:33.880 bw ( KiB/s): min=41005, max=114864, per=100.00%, avg=109436.42, stdev=13338.42, samples=59 00:14:33.880 iops : min=10251, max=28716, avg=27359.10, stdev=3334.63, samples=59 00:14:33.880 write: IOPS=13.7k, BW=53.3MiB/s (55.9MB/s)(3200MiB/60002msec); 0 zone resets 00:14:33.880 slat (nsec): min=1464, max=317175, avg=5813.11, stdev=1552.61 00:14:33.880 clat (usec): min=725, max=30346k, avg=4835.09, stdev=274374.92 00:14:33.880 lat (usec): min=731, max=30346k, avg=4840.90, stdev=274374.92 00:14:33.880 clat percentiles (usec): 00:14:33.880 | 1.00th=[ 1926], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2180], 00:14:33.880 | 30.00th=[ 2212], 40.00th=[ 2212], 50.00th=[ 2245], 60.00th=[ 2245], 00:14:33.880 | 70.00th=[ 2278], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3163], 00:14:33.880 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7439], 99.95th=[ 8029], 00:14:33.880 | 99.99th=[13173] 00:14:33.880 bw ( KiB/s): min=41644, max=114176, per=100.00%, avg=109286.85, stdev=13126.26, samples=59 00:14:33.880 iops : min=10411, max=28544, avg=27321.71, stdev=3281.56, samples=59 00:14:33.880 lat (usec) : 750=0.01%, 1000=0.01% 00:14:33.880 lat (msec) : 2=2.34%, 4=94.70%, 10=2.93%, 20=0.02%, >=2000=0.01% 00:14:33.880 cpu : usr=2.95%, sys=15.95%, ctx=53635, majf=0, minf=14 00:14:33.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:33.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:33.880 issued rwts: total=820275,819305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:33.880 00:14:33.880 Run status group 0 (all jobs): 00:14:33.880 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=3204MiB (3360MB), run=60002-60002msec 00:14:33.880 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=3200MiB (3356MB), run=60002-60002msec 00:14:33.880 00:14:33.880 Disk stats (read/write): 00:14:33.880 ublkb1: ios=817104/816157, merge=0/0, ticks=3656879/3838938, in_queue=7495817, util=99.89% 00:14:33.880 20:07:41 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:33.880 20:07:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.880 20:07:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.880 [2024-12-16 20:07:41.410241] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:33.880 [2024-12-16 20:07:41.441354] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:33.880 [2024-12-16 20:07:41.441595] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:33.880 [2024-12-16 20:07:41.457345] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:33.880 [2024-12-16 20:07:41.461405] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: 
remove from tailq 00:14:33.880 [2024-12-16 20:07:41.461419] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:33.880 20:07:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.880 20:07:41 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:33.880 20:07:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.880 20:07:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.880 [2024-12-16 20:07:41.465432] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:33.880 [2024-12-16 20:07:41.473271] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:33.880 [2024-12-16 20:07:41.477320] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:33.880 20:07:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.880 20:07:41 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:33.880 20:07:41 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:33.880 20:07:41 -- ublk/ublk_recovery.sh@14 -- # killprocess 69617 00:14:33.880 20:07:41 -- common/autotest_common.sh@936 -- # '[' -z 69617 ']' 00:14:33.880 20:07:41 -- common/autotest_common.sh@940 -- # kill -0 69617 00:14:33.880 20:07:41 -- common/autotest_common.sh@941 -- # uname 00:14:33.880 20:07:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:33.880 20:07:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69617 00:14:33.880 20:07:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:33.880 20:07:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:33.880 20:07:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69617' 00:14:33.880 killing process with pid 69617 00:14:33.880 20:07:41 -- common/autotest_common.sh@955 -- # kill 69617 00:14:33.880 20:07:41 -- common/autotest_common.sh@960 -- # wait 69617 00:14:35.256 [2024-12-16 20:07:42.566525] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:35.256 [2024-12-16 20:07:42.566725] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:35.825 00:14:35.825 real 1m5.023s 00:14:35.825 user 1m47.264s 00:14:35.825 sys 0m23.380s 00:14:35.825 20:07:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:35.825 20:07:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.825 ************************************ 00:14:35.825 END TEST ublk_recovery 00:14:35.825 ************************************ 00:14:35.825 20:07:43 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:35.825 20:07:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.825 20:07:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.825 20:07:43 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:35.825 20:07:43 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:35.825 20:07:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:35.825 20:07:43 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.825 20:07:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.825 ************************************ 00:14:35.825 START TEST ftl 00:14:35.825 ************************************ 00:14:35.825 20:07:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:36.086 * Looking for test storage... 00:14:36.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:36.086 20:07:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:36.086 20:07:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:36.086 20:07:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:36.086 20:07:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:36.086 20:07:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:36.086 20:07:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:36.086 20:07:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:36.086 20:07:43 -- scripts/common.sh@335 -- # IFS=.-: 00:14:36.086 20:07:43 -- scripts/common.sh@335 -- # read -ra ver1 00:14:36.086 20:07:43 -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.086 20:07:43 -- scripts/common.sh@336 -- # read -ra ver2 00:14:36.086 20:07:43 -- scripts/common.sh@337 -- # local 'op=<' 00:14:36.086 20:07:43 -- scripts/common.sh@339 -- # ver1_l=2 00:14:36.086 20:07:43 -- scripts/common.sh@340 -- # ver2_l=1 00:14:36.086 20:07:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:36.086 20:07:43 -- scripts/common.sh@343 -- # case "$op" in 00:14:36.086 20:07:43 -- scripts/common.sh@344 -- # : 1 00:14:36.086 20:07:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:36.086 20:07:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.086 20:07:43 -- scripts/common.sh@364 -- # decimal 1 00:14:36.086 20:07:43 -- scripts/common.sh@352 -- # local d=1 00:14:36.086 20:07:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.086 20:07:43 -- scripts/common.sh@354 -- # echo 1 00:14:36.086 20:07:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:36.086 20:07:43 -- scripts/common.sh@365 -- # decimal 2 00:14:36.086 20:07:43 -- scripts/common.sh@352 -- # local d=2 00:14:36.086 20:07:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.086 20:07:43 -- scripts/common.sh@354 -- # echo 2 00:14:36.086 20:07:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:36.086 20:07:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:36.086 20:07:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:36.086 20:07:43 -- scripts/common.sh@367 -- # return 0 00:14:36.087 20:07:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.087 20:07:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:36.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.087 --rc genhtml_branch_coverage=1 00:14:36.087 --rc genhtml_function_coverage=1 00:14:36.087 --rc genhtml_legend=1 00:14:36.087 --rc geninfo_all_blocks=1 00:14:36.087 --rc geninfo_unexecuted_blocks=1 00:14:36.087 00:14:36.087 ' 00:14:36.087 20:07:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:36.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.087 --rc genhtml_branch_coverage=1 00:14:36.087 --rc genhtml_function_coverage=1 00:14:36.087 --rc genhtml_legend=1 00:14:36.087 --rc geninfo_all_blocks=1 00:14:36.087 --rc geninfo_unexecuted_blocks=1 00:14:36.087 00:14:36.087 ' 00:14:36.087 
20:07:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:36.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.087 --rc genhtml_branch_coverage=1 00:14:36.087 --rc genhtml_function_coverage=1 00:14:36.087 --rc genhtml_legend=1 00:14:36.087 --rc geninfo_all_blocks=1 00:14:36.087 --rc geninfo_unexecuted_blocks=1 00:14:36.087 00:14:36.087 ' 00:14:36.087 20:07:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:36.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.087 --rc genhtml_branch_coverage=1 00:14:36.087 --rc genhtml_function_coverage=1 00:14:36.087 --rc genhtml_legend=1 00:14:36.087 --rc geninfo_all_blocks=1 00:14:36.087 --rc geninfo_unexecuted_blocks=1 00:14:36.087 00:14:36.087 ' 00:14:36.087 20:07:43 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:36.087 20:07:43 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:36.087 20:07:43 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:36.087 20:07:43 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:36.087 20:07:43 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:36.087 20:07:43 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:36.087 20:07:43 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.087 20:07:43 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:36.087 20:07:43 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:36.087 20:07:43 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.087 20:07:43 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.087 20:07:43 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:36.087 20:07:43 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:36.087 20:07:43 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:36.087 20:07:43 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:36.087 20:07:43 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:36.087 20:07:43 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:36.087 20:07:43 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.087 20:07:43 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.087 20:07:43 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:36.087 20:07:43 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:36.087 20:07:43 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:36.087 20:07:43 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:36.087 20:07:43 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:36.087 20:07:43 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:36.087 20:07:43 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:36.087 20:07:43 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:36.087 20:07:43 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:36.087 20:07:43 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:36.087 20:07:43 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.087 20:07:43 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:36.087 20:07:43 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:36.087 20:07:43 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:36.087 20:07:43 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:36.087 20:07:43 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:36.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:36.659 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.659 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.659 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.659 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.659 20:07:44 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70432 00:14:36.659 20:07:44 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:36.659 20:07:44 -- ftl/ftl.sh@38 -- # waitforlisten 70432 00:14:36.659 20:07:44 -- common/autotest_common.sh@829 -- # '[' -z 70432 ']' 00:14:36.659 20:07:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.659 20:07:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.659 20:07:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.659 20:07:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.659 20:07:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.660 [2024-12-16 20:07:44.142126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:36.660 [2024-12-16 20:07:44.142506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70432 ] 00:14:36.660 [2024-12-16 20:07:44.293828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.922 [2024-12-16 20:07:44.530466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:36.922 [2024-12-16 20:07:44.530983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.496 20:07:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.496 20:07:44 -- common/autotest_common.sh@862 -- # return 0 00:14:37.496 20:07:44 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:37.757 20:07:45 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:38.712 20:07:46 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:38.712 20:07:46 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:38.973 20:07:46 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:38.973 20:07:46 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:38.973 20:07:46 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:39.234 20:07:46 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:39.234 20:07:46 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:39.234 20:07:46 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:39.234 20:07:46 -- 
ftl/ftl.sh@50 -- # break 00:14:39.234 20:07:46 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:39.234 20:07:46 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:39.234 20:07:46 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:39.234 20:07:46 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:39.496 20:07:46 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:39.496 20:07:46 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:39.496 20:07:46 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:39.496 20:07:46 -- ftl/ftl.sh@63 -- # break 00:14:39.496 20:07:46 -- ftl/ftl.sh@66 -- # killprocess 70432 00:14:39.496 20:07:46 -- common/autotest_common.sh@936 -- # '[' -z 70432 ']' 00:14:39.496 20:07:46 -- common/autotest_common.sh@940 -- # kill -0 70432 00:14:39.496 20:07:46 -- common/autotest_common.sh@941 -- # uname 00:14:39.496 20:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.496 20:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70432 00:14:39.496 killing process with pid 70432 00:14:39.496 20:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:39.496 20:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:39.496 20:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70432' 00:14:39.496 20:07:46 -- common/autotest_common.sh@955 -- # kill 70432 00:14:39.496 20:07:46 -- common/autotest_common.sh@960 -- # wait 70432 00:14:40.877 20:07:48 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:40.877 20:07:48 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:40.877 20:07:48 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:40.877 20:07:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:40.877 20:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.877 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:14:40.877 ************************************ 00:14:40.877 START TEST ftl_fio_basic 00:14:40.877 ************************************ 00:14:40.877 20:07:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:40.877 * Looking for test storage... 
00:14:40.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.877 20:07:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:40.877 20:07:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:40.877 20:07:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:40.877 20:07:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:40.877 20:07:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:40.877 20:07:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:40.877 20:07:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:40.877 20:07:48 -- scripts/common.sh@335 -- # IFS=.-: 00:14:40.877 20:07:48 -- scripts/common.sh@335 -- # read -ra ver1 00:14:40.877 20:07:48 -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.877 20:07:48 -- scripts/common.sh@336 -- # read -ra ver2 00:14:40.877 20:07:48 -- scripts/common.sh@337 -- # local 'op=<' 00:14:40.877 20:07:48 -- scripts/common.sh@339 -- # ver1_l=2 00:14:40.877 20:07:48 -- scripts/common.sh@340 -- # ver2_l=1 00:14:40.877 20:07:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:40.877 20:07:48 -- scripts/common.sh@343 -- # case "$op" in 00:14:40.877 20:07:48 -- scripts/common.sh@344 -- # : 1 00:14:40.877 20:07:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:40.877 20:07:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:40.877 20:07:48 -- scripts/common.sh@364 -- # decimal 1 00:14:40.877 20:07:48 -- scripts/common.sh@352 -- # local d=1 00:14:40.877 20:07:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.877 20:07:48 -- scripts/common.sh@354 -- # echo 1 00:14:40.877 20:07:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:40.877 20:07:48 -- scripts/common.sh@365 -- # decimal 2 00:14:40.877 20:07:48 -- scripts/common.sh@352 -- # local d=2 00:14:40.877 20:07:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.877 20:07:48 -- scripts/common.sh@354 -- # echo 2 00:14:40.877 20:07:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:40.877 20:07:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:40.877 20:07:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:40.877 20:07:48 -- scripts/common.sh@367 -- # return 0 00:14:40.877 20:07:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.877 20:07:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.877 --rc genhtml_branch_coverage=1 00:14:40.877 --rc genhtml_function_coverage=1 00:14:40.877 --rc genhtml_legend=1 00:14:40.877 --rc geninfo_all_blocks=1 00:14:40.877 --rc geninfo_unexecuted_blocks=1 00:14:40.877 00:14:40.877 ' 00:14:40.877 20:07:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.877 --rc genhtml_branch_coverage=1 00:14:40.877 --rc genhtml_function_coverage=1 00:14:40.877 --rc genhtml_legend=1 00:14:40.877 --rc geninfo_all_blocks=1 00:14:40.877 --rc geninfo_unexecuted_blocks=1 00:14:40.877 00:14:40.877 ' 00:14:40.877 20:07:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.877 --rc genhtml_branch_coverage=1 00:14:40.877 --rc genhtml_function_coverage=1 00:14:40.877 --rc genhtml_legend=1 00:14:40.877 --rc geninfo_all_blocks=1 00:14:40.877 --rc geninfo_unexecuted_blocks=1 00:14:40.877 00:14:40.877 ' 00:14:40.877 20:07:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.878 --rc genhtml_branch_coverage=1 00:14:40.878 --rc genhtml_function_coverage=1 00:14:40.878 --rc genhtml_legend=1 00:14:40.878 --rc geninfo_all_blocks=1 00:14:40.878 --rc geninfo_unexecuted_blocks=1 00:14:40.878 00:14:40.878 ' 00:14:40.878 20:07:48 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:40.878 20:07:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:40.878 20:07:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.878 20:07:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.878 20:07:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:40.878 20:07:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:40.878 20:07:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.878 20:07:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:40.878 20:07:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:40.878 20:07:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.878 20:07:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.878 20:07:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:40.878 20:07:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:40.878 20:07:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:40.878 20:07:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:40.878 20:07:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:40.878 20:07:48 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:40.878 20:07:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.878 20:07:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.878 20:07:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:40.878 20:07:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:40.878 20:07:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:40.878 20:07:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:40.878 20:07:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:40.878 20:07:48 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:40.878 20:07:48 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:40.878 20:07:48 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:40.878 20:07:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.878 20:07:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.878 20:07:48 -- ftl/fio.sh@11 -- # declare -A suite 00:14:40.878 20:07:48 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:40.878 20:07:48 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:40.878 20:07:48 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:40.878 20:07:48 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.878 20:07:48 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:40.878 20:07:48 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:40.878 20:07:48 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:40.878 20:07:48 -- ftl/fio.sh@26 -- # uuid= 00:14:40.878 20:07:48 -- ftl/fio.sh@27 -- # timeout=240 00:14:40.878 20:07:48 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:40.878 20:07:48 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:40.878 20:07:48 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:40.878 20:07:48 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:40.878 20:07:48 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:40.878 20:07:48 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:40.878 20:07:48 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:40.878 20:07:48 -- ftl/fio.sh@45 -- # svcpid=70569 00:14:40.878 20:07:48 -- ftl/fio.sh@46 -- # waitforlisten 70569 00:14:40.878 20:07:48 -- common/autotest_common.sh@829 -- # '[' -z 70569 ']' 00:14:40.878 20:07:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.878 20:07:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.878 20:07:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.878 20:07:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.878 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:14:40.878 20:07:48 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:41.137 [2024-12-16 20:07:48.564025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:41.137 [2024-12-16 20:07:48.564865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70569 ] 00:14:41.137 [2024-12-16 20:07:48.712581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.396 [2024-12-16 20:07:48.884048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.396 [2024-12-16 20:07:48.884608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.396 [2024-12-16 20:07:48.885028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.396 [2024-12-16 20:07:48.885032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.771 20:07:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.771 20:07:50 -- common/autotest_common.sh@862 -- # return 0 00:14:42.771 20:07:50 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:42.771 20:07:50 -- ftl/common.sh@54 -- # local name=nvme0 00:14:42.771 20:07:50 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:42.771 20:07:50 -- ftl/common.sh@56 -- # local size=103424 00:14:42.771 20:07:50 -- ftl/common.sh@59 -- # local base_bdev 00:14:42.772 20:07:50 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:42.772 20:07:50 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:42.772 20:07:50 -- ftl/common.sh@62 -- # local base_size 00:14:42.772 20:07:50 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:42.772 20:07:50 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:42.772 20:07:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:42.772 20:07:50 -- common/autotest_common.sh@1369 -- # local bs 00:14:42.772 20:07:50 -- common/autotest_common.sh@1370 -- # local nb 00:14:42.772 20:07:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:43.035 20:07:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:43.035 { 00:14:43.035 "name": "nvme0n1", 00:14:43.035 "aliases": [ 00:14:43.035 "4517a5f0-85ae-4fd2-ae15-62a211d0a7d4" 00:14:43.035 ], 00:14:43.035 "product_name": "NVMe disk", 00:14:43.035 "block_size": 4096, 00:14:43.035 "num_blocks": 1310720, 00:14:43.035 "uuid": "4517a5f0-85ae-4fd2-ae15-62a211d0a7d4", 00:14:43.035 "assigned_rate_limits": { 00:14:43.035 "rw_ios_per_sec": 0, 00:14:43.035 "rw_mbytes_per_sec": 0, 00:14:43.035 "r_mbytes_per_sec": 0, 00:14:43.035 "w_mbytes_per_sec": 0 00:14:43.035 }, 00:14:43.035 "claimed": false, 00:14:43.035 "zoned": false, 00:14:43.035 "supported_io_types": { 00:14:43.035 "read": true, 00:14:43.035 "write": true, 00:14:43.035 "unmap": true, 00:14:43.035 "write_zeroes": true, 00:14:43.035 "flush": true, 00:14:43.035 "reset": true, 00:14:43.035 "compare": true, 00:14:43.035 "compare_and_write": false, 00:14:43.035 "abort": true, 00:14:43.035 "nvme_admin": true, 00:14:43.035 "nvme_io": true 00:14:43.035 }, 00:14:43.035 "driver_specific": { 00:14:43.035 "nvme": [ 00:14:43.035 { 00:14:43.035 "pci_address": "0000:00:07.0", 00:14:43.035 "trid": { 00:14:43.035 "trtype": "PCIe", 00:14:43.035 "traddr": "0000:00:07.0" 00:14:43.035 }, 00:14:43.035 "ctrlr_data": { 00:14:43.035 "cntlid": 0, 00:14:43.035 "vendor_id": "0x1b36", 00:14:43.035 "model_number": "QEMU NVMe Ctrl", 00:14:43.035 "serial_number": 
"12341", 00:14:43.035 "firmware_revision": "8.0.0", 00:14:43.035 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:43.035 "oacs": { 00:14:43.035 "security": 0, 00:14:43.035 "format": 1, 00:14:43.035 "firmware": 0, 00:14:43.035 "ns_manage": 1 00:14:43.035 }, 00:14:43.035 "multi_ctrlr": false, 00:14:43.035 "ana_reporting": false 00:14:43.035 }, 00:14:43.035 "vs": { 00:14:43.035 "nvme_version": "1.4" 00:14:43.035 }, 00:14:43.035 "ns_data": { 00:14:43.035 "id": 1, 00:14:43.035 "can_share": false 00:14:43.035 } 00:14:43.035 } 00:14:43.035 ], 00:14:43.035 "mp_policy": "active_passive" 00:14:43.035 } 00:14:43.035 } 00:14:43.035 ]' 00:14:43.035 20:07:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:43.035 20:07:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:43.035 20:07:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:43.035 20:07:50 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:43.035 20:07:50 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:43.035 20:07:50 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:43.035 20:07:50 -- ftl/common.sh@63 -- # base_size=5120 00:14:43.035 20:07:50 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:43.035 20:07:50 -- ftl/common.sh@67 -- # clear_lvols 00:14:43.035 20:07:50 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:43.035 20:07:50 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:43.293 20:07:50 -- ftl/common.sh@28 -- # stores= 00:14:43.293 20:07:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:43.552 20:07:50 -- ftl/common.sh@68 -- # lvs=ac6b55e8-1535-4ce2-91fb-776ea3f7f627 00:14:43.552 20:07:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ac6b55e8-1535-4ce2-91fb-776ea3f7f627 00:14:43.552 20:07:51 -- ftl/fio.sh@48 -- # split_bdev=ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:43.552 20:07:51 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:43.552 20:07:51 -- ftl/common.sh@35 -- # local name=nvc0 00:14:43.552 20:07:51 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:43.552 20:07:51 -- ftl/common.sh@37 -- # local base_bdev=ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:43.552 20:07:51 -- ftl/common.sh@38 -- # local cache_size= 00:14:43.552 20:07:51 -- ftl/common.sh@41 -- # get_bdev_size ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:43.552 20:07:51 -- common/autotest_common.sh@1367 -- # local bdev_name=ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:43.552 20:07:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:43.552 20:07:51 -- common/autotest_common.sh@1369 -- # local bs 00:14:43.552 20:07:51 -- common/autotest_common.sh@1370 -- # local nb 00:14:43.552 20:07:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:43.810 20:07:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:43.810 { 00:14:43.810 "name": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:43.810 "aliases": [ 00:14:43.810 "lvs/nvme0n1p0" 00:14:43.810 ], 00:14:43.810 "product_name": "Logical Volume", 00:14:43.810 "block_size": 4096, 00:14:43.810 "num_blocks": 26476544, 00:14:43.810 "uuid": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:43.810 "assigned_rate_limits": { 00:14:43.810 "rw_ios_per_sec": 0, 00:14:43.810 "rw_mbytes_per_sec": 0, 00:14:43.810 "r_mbytes_per_sec": 0, 00:14:43.810 
"w_mbytes_per_sec": 0 00:14:43.810 }, 00:14:43.810 "claimed": false, 00:14:43.810 "zoned": false, 00:14:43.810 "supported_io_types": { 00:14:43.810 "read": true, 00:14:43.810 "write": true, 00:14:43.810 "unmap": true, 00:14:43.810 "write_zeroes": true, 00:14:43.810 "flush": false, 00:14:43.810 "reset": true, 00:14:43.810 "compare": false, 00:14:43.810 "compare_and_write": false, 00:14:43.810 "abort": false, 00:14:43.810 "nvme_admin": false, 00:14:43.810 "nvme_io": false 00:14:43.810 }, 00:14:43.810 "driver_specific": { 00:14:43.810 "lvol": { 00:14:43.810 "lvol_store_uuid": "ac6b55e8-1535-4ce2-91fb-776ea3f7f627", 00:14:43.810 "base_bdev": "nvme0n1", 00:14:43.810 "thin_provision": true, 00:14:43.810 "snapshot": false, 00:14:43.810 "clone": false, 00:14:43.810 "esnap_clone": false 00:14:43.810 } 00:14:43.810 } 00:14:43.810 } 00:14:43.810 ]' 00:14:43.810 20:07:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:43.810 20:07:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:43.810 20:07:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:43.810 20:07:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:43.810 20:07:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:43.810 20:07:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:43.810 20:07:51 -- ftl/common.sh@41 -- # local base_size=5171 00:14:43.810 20:07:51 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:43.810 20:07:51 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:44.069 20:07:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:44.069 20:07:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:44.069 20:07:51 -- ftl/common.sh@48 -- # get_bdev_size ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:44.069 20:07:51 -- common/autotest_common.sh@1367 -- # local bdev_name=ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:44.069 20:07:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:44.069 20:07:51 -- common/autotest_common.sh@1369 -- # local bs 00:14:44.069 20:07:51 -- common/autotest_common.sh@1370 -- # local nb 00:14:44.069 20:07:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:44.327 20:07:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:44.327 { 00:14:44.327 "name": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:44.327 "aliases": [ 00:14:44.327 "lvs/nvme0n1p0" 00:14:44.327 ], 00:14:44.327 "product_name": "Logical Volume", 00:14:44.327 "block_size": 4096, 00:14:44.327 "num_blocks": 26476544, 00:14:44.327 "uuid": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:44.327 "assigned_rate_limits": { 00:14:44.327 "rw_ios_per_sec": 0, 00:14:44.327 "rw_mbytes_per_sec": 0, 00:14:44.327 "r_mbytes_per_sec": 0, 00:14:44.327 "w_mbytes_per_sec": 0 00:14:44.327 }, 00:14:44.327 "claimed": false, 00:14:44.327 "zoned": false, 00:14:44.327 "supported_io_types": { 00:14:44.327 "read": true, 00:14:44.327 "write": true, 00:14:44.327 "unmap": true, 00:14:44.327 "write_zeroes": true, 00:14:44.327 "flush": false, 00:14:44.327 "reset": true, 00:14:44.327 "compare": false, 00:14:44.327 "compare_and_write": false, 00:14:44.327 "abort": false, 00:14:44.327 "nvme_admin": false, 00:14:44.327 "nvme_io": false 00:14:44.327 }, 00:14:44.327 "driver_specific": { 00:14:44.327 "lvol": { 00:14:44.327 "lvol_store_uuid": "ac6b55e8-1535-4ce2-91fb-776ea3f7f627", 00:14:44.327 "base_bdev": "nvme0n1", 00:14:44.327 "thin_provision": true, 
00:14:44.327 "snapshot": false, 00:14:44.327 "clone": false, 00:14:44.327 "esnap_clone": false 00:14:44.327 } 00:14:44.327 } 00:14:44.327 } 00:14:44.327 ]' 00:14:44.327 20:07:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:44.327 20:07:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:44.327 20:07:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:44.327 20:07:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:44.327 20:07:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:44.327 20:07:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:44.327 20:07:51 -- ftl/common.sh@48 -- # cache_size=5171 00:14:44.327 20:07:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:44.586 20:07:52 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:44.586 20:07:52 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:44.586 20:07:52 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:44.586 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:44.586 20:07:52 -- ftl/fio.sh@56 -- # get_bdev_size ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:44.586 20:07:52 -- common/autotest_common.sh@1367 -- # local bdev_name=ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:44.586 20:07:52 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:44.586 20:07:52 -- common/autotest_common.sh@1369 -- # local bs 00:14:44.586 20:07:52 -- common/autotest_common.sh@1370 -- # local nb 00:14:44.586 20:07:52 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c 00:14:44.844 20:07:52 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:44.844 { 00:14:44.844 "name": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:44.844 "aliases": [ 00:14:44.844 "lvs/nvme0n1p0" 00:14:44.844 ], 00:14:44.844 "product_name": "Logical Volume", 00:14:44.844 "block_size": 4096, 00:14:44.844 "num_blocks": 26476544, 00:14:44.844 "uuid": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:44.844 "assigned_rate_limits": { 00:14:44.844 "rw_ios_per_sec": 0, 00:14:44.844 "rw_mbytes_per_sec": 0, 00:14:44.844 "r_mbytes_per_sec": 0, 00:14:44.844 "w_mbytes_per_sec": 0 00:14:44.844 }, 00:14:44.844 "claimed": false, 00:14:44.844 "zoned": false, 00:14:44.844 "supported_io_types": { 00:14:44.844 "read": true, 00:14:44.844 "write": true, 00:14:44.844 "unmap": true, 00:14:44.844 "write_zeroes": true, 00:14:44.844 "flush": false, 00:14:44.844 "reset": true, 00:14:44.844 "compare": false, 00:14:44.844 "compare_and_write": false, 00:14:44.844 "abort": false, 00:14:44.844 "nvme_admin": false, 00:14:44.844 "nvme_io": false 00:14:44.844 }, 00:14:44.844 "driver_specific": { 00:14:44.844 "lvol": { 00:14:44.844 "lvol_store_uuid": "ac6b55e8-1535-4ce2-91fb-776ea3f7f627", 00:14:44.844 "base_bdev": "nvme0n1", 00:14:44.844 "thin_provision": true, 00:14:44.844 "snapshot": false, 00:14:44.844 "clone": false, 00:14:44.844 "esnap_clone": false 00:14:44.844 } 00:14:44.844 } 00:14:44.844 } 00:14:44.844 ]' 00:14:44.844 20:07:52 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:44.844 20:07:52 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:44.844 20:07:52 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:44.844 20:07:52 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:44.844 20:07:52 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:44.844 20:07:52 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:44.844 
20:07:52 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:44.844 20:07:52 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:44.844 20:07:52 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c -c nvc0n1p0 --l2p_dram_limit 60 00:14:45.104 [2024-12-16 20:07:52.504534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.504577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:45.104 [2024-12-16 20:07:52.504591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:14:45.104 [2024-12-16 20:07:52.504598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.504652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.504660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:45.104 [2024-12-16 20:07:52.504668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:45.104 [2024-12-16 20:07:52.504674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.504698] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:45.104 [2024-12-16 20:07:52.505239] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:45.104 [2024-12-16 20:07:52.505255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.505261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:45.104 [2024-12-16 20:07:52.505270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:14:45.104 [2024-12-16 20:07:52.505276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.505324] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 43fad67a-1668-4e30-bb3b-96ef56ae4560 00:14:45.104 [2024-12-16 20:07:52.506576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.506601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:45.104 [2024-12-16 20:07:52.506610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:14:45.104 [2024-12-16 20:07:52.506619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.513262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.513293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:45.104 [2024-12-16 20:07:52.513311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.555 ms 00:14:45.104 [2024-12-16 20:07:52.513320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.513398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.513408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:45.104 [2024-12-16 20:07:52.513415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:14:45.104 [2024-12-16 20:07:52.513423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.513468] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.513478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:45.104 [2024-12-16 20:07:52.513484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:45.104 [2024-12-16 20:07:52.513494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.513524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:45.104 [2024-12-16 20:07:52.516813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.516839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:45.104 [2024-12-16 20:07:52.516849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:14:45.104 [2024-12-16 20:07:52.516855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.516896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.516903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:45.104 [2024-12-16 20:07:52.516911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:45.104 [2024-12-16 20:07:52.516917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.516946] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:45.104 [2024-12-16 20:07:52.517037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:45.104 [2024-12-16 20:07:52.517051] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:45.104 [2024-12-16 20:07:52.517059] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:45.104 [2024-12-16 20:07:52.517068] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:45.104 [2024-12-16 20:07:52.517075] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:45.104 [2024-12-16 20:07:52.517083] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:45.104 [2024-12-16 20:07:52.517089] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:45.104 [2024-12-16 20:07:52.517099] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:45.104 [2024-12-16 20:07:52.517110] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:45.104 [2024-12-16 20:07:52.517118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.517123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:45.104 [2024-12-16 20:07:52.517131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:14:45.104 [2024-12-16 20:07:52.517136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.104 [2024-12-16 20:07:52.517197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.104 [2024-12-16 20:07:52.517204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:45.105 [2024-12-16 20:07:52.517212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.036 ms 00:14:45.105 [2024-12-16 20:07:52.517217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.517312] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:45.105 [2024-12-16 20:07:52.517321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:45.105 [2024-12-16 20:07:52.517330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:45.105 [2024-12-16 20:07:52.517350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:45.105 [2024-12-16 20:07:52.517369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:45.105 [2024-12-16 20:07:52.517381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:45.105 [2024-12-16 20:07:52.517387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:45.105 [2024-12-16 20:07:52.517396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:45.105 [2024-12-16 20:07:52.517401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:45.105 [2024-12-16 20:07:52.517408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:45.105 [2024-12-16 20:07:52.517414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:45.105 [2024-12-16 20:07:52.517428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:45.105 [2024-12-16 20:07:52.517434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:45.105 [2024-12-16 20:07:52.517446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:45.105 [2024-12-16 20:07:52.517451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:45.105 [2024-12-16 20:07:52.517466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:45.105 [2024-12-16 20:07:52.517484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:45.105 [2024-12-16 20:07:52.517501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517507] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:45.105 [2024-12-16 20:07:52.517521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:45.105 [2024-12-16 20:07:52.517551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:45.105 [2024-12-16 20:07:52.517564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:45.105 [2024-12-16 20:07:52.517570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:45.105 [2024-12-16 20:07:52.517575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:45.105 [2024-12-16 20:07:52.517581] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:45.105 [2024-12-16 20:07:52.517588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:45.105 [2024-12-16 20:07:52.517595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.105 [2024-12-16 20:07:52.517608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:45.105 [2024-12-16 20:07:52.517613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:45.105 [2024-12-16 20:07:52.517620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:45.105 [2024-12-16 20:07:52.517625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:45.105 [2024-12-16 20:07:52.517634] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:45.105 [2024-12-16 20:07:52.517639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:45.105 [2024-12-16 20:07:52.517647] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:45.105 [2024-12-16 20:07:52.517655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:45.105 [2024-12-16 20:07:52.517664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:45.105 [2024-12-16 20:07:52.517670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:45.105 [2024-12-16 20:07:52.517677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:45.105 [2024-12-16 20:07:52.517684] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:45.105 [2024-12-16 20:07:52.517691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:45.105 [2024-12-16 20:07:52.517697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:45.105 
[2024-12-16 20:07:52.517704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:45.105 [2024-12-16 20:07:52.517709] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:45.105 [2024-12-16 20:07:52.517716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:45.105 [2024-12-16 20:07:52.517721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:45.105 [2024-12-16 20:07:52.517729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:45.105 [2024-12-16 20:07:52.517736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:45.105 [2024-12-16 20:07:52.517744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:45.105 [2024-12-16 20:07:52.517750] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:45.105 [2024-12-16 20:07:52.517759] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:45.105 [2024-12-16 20:07:52.517767] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:45.105 [2024-12-16 20:07:52.517775] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:45.105 [2024-12-16 20:07:52.517780] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:45.105 [2024-12-16 20:07:52.517788] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:45.105 [2024-12-16 20:07:52.517794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.517801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:45.105 [2024-12-16 20:07:52.517807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:14:45.105 [2024-12-16 20:07:52.517814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.531617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.531777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:45.105 [2024-12-16 20:07:52.531792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.736 ms 00:14:45.105 [2024-12-16 20:07:52.531800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.531880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.531891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:45.105 [2024-12-16 20:07:52.531899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:14:45.105 [2024-12-16 20:07:52.531908] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.559968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.559997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:45.105 [2024-12-16 20:07:52.560006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.011 ms 00:14:45.105 [2024-12-16 20:07:52.560015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.560046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.560055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:45.105 [2024-12-16 20:07:52.560062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:45.105 [2024-12-16 20:07:52.560070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.560494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.560518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:45.105 [2024-12-16 20:07:52.560526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:14:45.105 [2024-12-16 20:07:52.560533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.560642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.560652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:45.105 [2024-12-16 20:07:52.560658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:14:45.105 [2024-12-16 20:07:52.560665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.105 [2024-12-16 20:07:52.598008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.105 [2024-12-16 20:07:52.598062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:45.106 [2024-12-16 20:07:52.598081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.315 ms 00:14:45.106 [2024-12-16 20:07:52.598097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.106 [2024-12-16 20:07:52.608330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:45.106 [2024-12-16 20:07:52.623584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.106 [2024-12-16 20:07:52.623729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:45.106 [2024-12-16 20:07:52.623745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.319 ms 00:14:45.106 [2024-12-16 20:07:52.623752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.106 [2024-12-16 20:07:52.675719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.106 [2024-12-16 20:07:52.675751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:45.106 [2024-12-16 20:07:52.675761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.929 ms 00:14:45.106 [2024-12-16 20:07:52.675768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.106 [2024-12-16 20:07:52.675810] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
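Aside (not part of the captured output): the L2P sizing figures in this trace are self-consistent. The layout dump above reports 20971520 L2P entries with a 4-byte address size, i.e. an 80 MiB mapping table, while ftl/fio.sh created the bdev with --l2p_dram_limit 60, so only a ~60 MiB window of that table stays DRAM-resident; the "59 (of 60) MiB" resident size reported by ftl_l2p_cache_init presumably leaves a small amount of headroom for cache bookkeeping. A minimal shell sketch of the arithmetic, using only values copied from the trace:

  l2p_entries=20971520        # "L2P entries" from the layout dump
  l2p_addr_size=4             # "L2P address size" (bytes) from the layout dump
  l2p_dram_limit_mb=60        # --l2p_dram_limit passed to bdev_ftl_create by ftl/fio.sh
  echo "full L2P table: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB, DRAM-resident cap: ${l2p_dram_limit_mb} MiB"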
00:14:45.106 [2024-12-16 20:07:52.675819] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:48.400 [2024-12-16 20:07:55.525257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.525509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:48.400 [2024-12-16 20:07:55.525538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2849.432 ms 00:14:48.400 [2024-12-16 20:07:55.525547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.525988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.526016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:48.400 [2024-12-16 20:07:55.526030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:14:48.400 [2024-12-16 20:07:55.526040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.549552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.549588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:48.400 [2024-12-16 20:07:55.549603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.441 ms 00:14:48.400 [2024-12-16 20:07:55.549611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.573167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.573347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:48.400 [2024-12-16 20:07:55.573373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.512 ms 00:14:48.400 [2024-12-16 20:07:55.573381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.573707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.573720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:48.400 [2024-12-16 20:07:55.573731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:14:48.400 [2024-12-16 20:07:55.573739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.636711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.636862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:48.400 [2024-12-16 20:07:55.636887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.927 ms 00:14:48.400 [2024-12-16 20:07:55.636896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.662512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.662555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:48.400 [2024-12-16 20:07:55.662573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.330 ms 00:14:48.400 [2024-12-16 20:07:55.662582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.667148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.667182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:14:48.400 [2024-12-16 20:07:55.667196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.517 ms 00:14:48.400 [2024-12-16 20:07:55.667204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.691040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.691191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:48.400 [2024-12-16 20:07:55.691214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.794 ms 00:14:48.400 [2024-12-16 20:07:55.691221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.691284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.691293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:48.400 [2024-12-16 20:07:55.691318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:14:48.400 [2024-12-16 20:07:55.691325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.691437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.400 [2024-12-16 20:07:55.691447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:48.400 [2024-12-16 20:07:55.691459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:14:48.400 [2024-12-16 20:07:55.691466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.400 [2024-12-16 20:07:55.692855] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3187.843 ms, result 0 00:14:48.400 { 00:14:48.400 "name": "ftl0", 00:14:48.400 "uuid": "43fad67a-1668-4e30-bb3b-96ef56ae4560" 00:14:48.400 } 00:14:48.400 20:07:55 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:48.400 20:07:55 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:48.400 20:07:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.400 20:07:55 -- common/autotest_common.sh@899 -- # local i 00:14:48.400 20:07:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.400 20:07:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.400 20:07:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.400 20:07:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:48.660 [ 00:14:48.660 { 00:14:48.660 "name": "ftl0", 00:14:48.660 "aliases": [ 00:14:48.660 "43fad67a-1668-4e30-bb3b-96ef56ae4560" 00:14:48.660 ], 00:14:48.660 "product_name": "FTL disk", 00:14:48.660 "block_size": 4096, 00:14:48.660 "num_blocks": 20971520, 00:14:48.660 "uuid": "43fad67a-1668-4e30-bb3b-96ef56ae4560", 00:14:48.660 "assigned_rate_limits": { 00:14:48.660 "rw_ios_per_sec": 0, 00:14:48.660 "rw_mbytes_per_sec": 0, 00:14:48.660 "r_mbytes_per_sec": 0, 00:14:48.660 "w_mbytes_per_sec": 0 00:14:48.660 }, 00:14:48.660 "claimed": false, 00:14:48.660 "zoned": false, 00:14:48.660 "supported_io_types": { 00:14:48.660 "read": true, 00:14:48.660 "write": true, 00:14:48.660 "unmap": true, 00:14:48.660 "write_zeroes": true, 00:14:48.660 "flush": true, 00:14:48.660 "reset": false, 00:14:48.660 "compare": false, 00:14:48.660 "compare_and_write": false, 00:14:48.660 "abort": false, 00:14:48.660 "nvme_admin": false, 00:14:48.660 "nvme_io": false 00:14:48.660 }, 
00:14:48.660 "driver_specific": { 00:14:48.660 "ftl": { 00:14:48.660 "base_bdev": "ef2a1732-87cd-4c06-8114-3e7a5ac4aa3c", 00:14:48.660 "cache": "nvc0n1p0" 00:14:48.660 } 00:14:48.660 } 00:14:48.660 } 00:14:48.660 ] 00:14:48.660 20:07:56 -- common/autotest_common.sh@905 -- # return 0 00:14:48.660 20:07:56 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:48.660 20:07:56 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:48.660 20:07:56 -- ftl/fio.sh@70 -- # echo ']}' 00:14:48.660 20:07:56 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:48.919 [2024-12-16 20:07:56.457254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.457292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:48.919 [2024-12-16 20:07:56.457311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:48.919 [2024-12-16 20:07:56.457319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.457349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:48.919 [2024-12-16 20:07:56.459480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.459505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:48.919 [2024-12-16 20:07:56.459518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:14:48.919 [2024-12-16 20:07:56.459524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.459917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.459930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:48.919 [2024-12-16 20:07:56.459939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:14:48.919 [2024-12-16 20:07:56.459945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.462422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.462441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:48.919 [2024-12-16 20:07:56.462449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:14:48.919 [2024-12-16 20:07:56.462456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.467177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.467199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:48.919 [2024-12-16 20:07:56.467208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:14:48.919 [2024-12-16 20:07:56.467215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.484915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.484943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:48.919 [2024-12-16 20:07:56.484953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.617 ms 00:14:48.919 [2024-12-16 20:07:56.484959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.497637] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.497663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:48.919 [2024-12-16 20:07:56.497687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.634 ms 00:14:48.919 [2024-12-16 20:07:56.497694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.497855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.497864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:48.919 [2024-12-16 20:07:56.497874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:14:48.919 [2024-12-16 20:07:56.497881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.515881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.515906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:48.919 [2024-12-16 20:07:56.515916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.974 ms 00:14:48.919 [2024-12-16 20:07:56.515921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.533763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.533788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:48.919 [2024-12-16 20:07:56.533797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.802 ms 00:14:48.919 [2024-12-16 20:07:56.533803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.919 [2024-12-16 20:07:56.551338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.919 [2024-12-16 20:07:56.551364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:48.919 [2024-12-16 20:07:56.551385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.496 ms 00:14:48.919 [2024-12-16 20:07:56.551391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.179 [2024-12-16 20:07:56.568832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.179 [2024-12-16 20:07:56.568951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:49.179 [2024-12-16 20:07:56.568968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.364 ms 00:14:49.179 [2024-12-16 20:07:56.568973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.179 [2024-12-16 20:07:56.569008] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:49.179 [2024-12-16 20:07:56.569020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569057] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 
20:07:56.569230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:49.179 [2024-12-16 20:07:56.569388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:49.180 [2024-12-16 20:07:56.569414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:49.180 [2024-12-16 20:07:56.569732] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:49.180 [2024-12-16 20:07:56.569741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 43fad67a-1668-4e30-bb3b-96ef56ae4560 00:14:49.180 [2024-12-16 20:07:56.569747] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:49.180 [2024-12-16 20:07:56.569754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:49.180 [2024-12-16 20:07:56.569760] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:49.180 [2024-12-16 20:07:56.569767] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:49.180 [2024-12-16 20:07:56.569773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:49.180 [2024-12-16 20:07:56.569780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:49.180 [2024-12-16 20:07:56.569786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:49.180 [2024-12-16 20:07:56.569792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:49.180 [2024-12-16 20:07:56.569797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:49.180 [2024-12-16 20:07:56.569805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.180 [2024-12-16 20:07:56.569813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:49.180 [2024-12-16 20:07:56.569821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:14:49.180 [2024-12-16 20:07:56.569826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.580100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.180 [2024-12-16 20:07:56.580204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:49.180 [2024-12-16 20:07:56.580220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.237 ms 00:14:49.180 [2024-12-16 20:07:56.580226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.580411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.180 [2024-12-16 20:07:56.580418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:49.180 [2024-12-16 20:07:56.580427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:14:49.180 [2024-12-16 20:07:56.580432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.617479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.617508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:49.180 [2024-12-16 20:07:56.617519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.617526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.617587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.617595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:49.180 [2024-12-16 20:07:56.617603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.617609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.617683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.617691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:49.180 [2024-12-16 20:07:56.617699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.617705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.617731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.617740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:14:49.180 [2024-12-16 20:07:56.617749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.617755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.687850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.687886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:49.180 [2024-12-16 20:07:56.687897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.687904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.711570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.711597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:49.180 [2024-12-16 20:07:56.711607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.711614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.711680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.711687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:49.180 [2024-12-16 20:07:56.711695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.180 [2024-12-16 20:07:56.711701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.180 [2024-12-16 20:07:56.711756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.180 [2024-12-16 20:07:56.711764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:49.180 [2024-12-16 20:07:56.711774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.181 [2024-12-16 20:07:56.711780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.181 [2024-12-16 20:07:56.711869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.181 [2024-12-16 20:07:56.711877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:49.181 [2024-12-16 20:07:56.711886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.181 [2024-12-16 20:07:56.711892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.181 [2024-12-16 20:07:56.711936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.181 [2024-12-16 20:07:56.711943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:49.181 [2024-12-16 20:07:56.711950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.181 [2024-12-16 20:07:56.711958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.181 [2024-12-16 20:07:56.711997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.181 [2024-12-16 20:07:56.712005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:49.181 [2024-12-16 20:07:56.712012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.181 [2024-12-16 20:07:56.712018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.181 [2024-12-16 20:07:56.712065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.181 [2024-12-16 20:07:56.712074] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:49.181 [2024-12-16 20:07:56.712083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.181 [2024-12-16 20:07:56.712089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.181 [2024-12-16 20:07:56.712245] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 254.954 ms, result 0 00:14:49.181 true 00:14:49.181 20:07:56 -- ftl/fio.sh@75 -- # killprocess 70569 00:14:49.181 20:07:56 -- common/autotest_common.sh@936 -- # '[' -z 70569 ']' 00:14:49.181 20:07:56 -- common/autotest_common.sh@940 -- # kill -0 70569 00:14:49.181 20:07:56 -- common/autotest_common.sh@941 -- # uname 00:14:49.181 20:07:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:49.181 20:07:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70569 00:14:49.181 20:07:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:49.181 killing process with pid 70569 00:14:49.181 20:07:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:49.181 20:07:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70569' 00:14:49.181 20:07:56 -- common/autotest_common.sh@955 -- # kill 70569 00:14:49.181 20:07:56 -- common/autotest_common.sh@960 -- # wait 70569 00:14:54.465 20:08:01 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:54.465 20:08:01 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:54.465 20:08:01 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:54.465 20:08:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.465 20:08:01 -- common/autotest_common.sh@10 -- # set +x 00:14:54.465 20:08:01 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:54.465 20:08:01 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:54.465 20:08:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:54.465 20:08:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.465 20:08:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:54.465 20:08:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.465 20:08:01 -- common/autotest_common.sh@1330 -- # shift 00:14:54.465 20:08:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:54.465 20:08:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.465 20:08:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.465 20:08:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:54.465 20:08:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:54.465 20:08:01 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:54.465 20:08:01 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:54.465 20:08:01 -- common/autotest_common.sh@1336 -- # break 00:14:54.466 20:08:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:54.466 20:08:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:54.466 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:54.466 fio-3.35 00:14:54.466 Starting 1 thread 00:14:59.747 00:14:59.747 test: (groupid=0, jobs=1): err= 0: pid=70770: Mon Dec 16 20:08:06 2024 00:14:59.747 read: IOPS=1069, BW=71.0MiB/s (74.5MB/s)(255MiB/3583msec) 00:14:59.747 slat (nsec): min=4258, max=32705, avg=6667.71, stdev=3142.82 00:14:59.747 clat (usec): min=235, max=2935, avg=422.15, stdev=195.41 00:14:59.747 lat (usec): min=239, max=2944, avg=428.82, stdev=197.48 00:14:59.747 clat percentiles (usec): 00:14:59.747 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 310], 00:14:59.747 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 334], 00:14:59.747 | 70.00th=[ 416], 80.00th=[ 529], 90.00th=[ 783], 95.00th=[ 906], 00:14:59.747 | 99.00th=[ 979], 99.50th=[ 1020], 99.90th=[ 1237], 99.95th=[ 1303], 00:14:59.747 | 99.99th=[ 2933] 00:14:59.747 write: IOPS=1077, BW=71.5MiB/s (75.0MB/s)(256MiB/3580msec); 0 zone resets 00:14:59.747 slat (nsec): min=14298, max=58685, avg=20062.10, stdev=4913.25 00:14:59.747 clat (usec): min=265, max=1718, avg=469.71, stdev=226.50 00:14:59.747 lat (usec): min=282, max=1741, avg=489.77, stdev=229.64 00:14:59.747 clat percentiles (usec): 00:14:59.747 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 338], 00:14:59.747 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 371], 00:14:59.747 | 70.00th=[ 445], 80.00th=[ 603], 90.00th=[ 898], 95.00th=[ 979], 00:14:59.747 | 99.00th=[ 1172], 99.50th=[ 1385], 99.90th=[ 1598], 99.95th=[ 1614], 00:14:59.747 | 99.99th=[ 1713] 00:14:59.747 bw ( KiB/s): min=36584, max=102544, per=99.47%, avg=72857.14, stdev=25913.59, samples=7 00:14:59.747 iops : min= 538, max= 1508, avg=1071.43, stdev=381.08, samples=7 00:14:59.747 lat (usec) : 250=0.01%, 500=75.35%, 750=12.51%, 1000=10.21% 00:14:59.747 lat (msec) : 2=1.90%, 4=0.01% 00:14:59.748 cpu : usr=99.30%, sys=0.06%, ctx=4, majf=0, minf=1318 00:14:59.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.748 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.748 00:14:59.748 Run status group 0 (all jobs): 00:14:59.748 READ: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=255MiB (267MB), run=3583-3583msec 00:14:59.748 WRITE: bw=71.5MiB/s (75.0MB/s), 71.5MiB/s-71.5MiB/s (75.0MB/s-75.0MB/s), io=256MiB (269MB), run=3580-3580msec 00:15:00.382 ----------------------------------------------------- 00:15:00.382 Suppressions used: 00:15:00.382 count bytes template 00:15:00.382 1 5 /usr/src/fio/parse.c 00:15:00.382 1 8 libtcmalloc_minimal.so 00:15:00.382 1 904 libcrypto.so 00:15:00.382 ----------------------------------------------------- 00:15:00.382 00:15:00.382 20:08:07 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:00.382 20:08:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.382 20:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:00.382 20:08:07 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:00.382 20:08:07 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:00.382 20:08:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.382 20:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:00.382 20:08:07 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:00.382 20:08:07 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:00.382 20:08:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:00.382 20:08:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:00.382 20:08:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:00.382 20:08:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.382 20:08:07 -- common/autotest_common.sh@1330 -- # shift 00:15:00.382 20:08:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:00.382 20:08:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.382 20:08:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:00.382 20:08:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.382 20:08:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:00.382 20:08:08 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:00.382 20:08:08 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:00.382 20:08:08 -- common/autotest_common.sh@1336 -- # break 00:15:00.382 20:08:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:00.382 20:08:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:00.642 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:00.642 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:00.642 fio-3.35 00:15:00.642 Starting 2 threads 00:15:27.204 00:15:27.204 first_half: (groupid=0, jobs=1): err= 0: pid=70867: Mon Dec 16 20:08:32 2024 00:15:27.204 read: IOPS=2864, BW=11.2MiB/s (11.7MB/s)(255MiB/22777msec) 00:15:27.204 slat (nsec): min=3027, max=30341, avg=4956.21, stdev=1352.65 00:15:27.204 clat (usec): min=628, max=458035, avg=34507.95, stdev=19347.53 00:15:27.204 lat (usec): min=632, max=458039, avg=34512.91, stdev=19347.62 00:15:27.204 clat percentiles (msec): 00:15:27.204 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:15:27.204 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:15:27.204 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 41], 95.00th=[ 51], 00:15:27.204 | 99.00th=[ 131], 99.50th=[ 146], 99.90th=[ 249], 99.95th=[ 380], 00:15:27.204 | 99.99th=[ 443] 00:15:27.204 write: IOPS=3684, BW=14.4MiB/s (15.1MB/s)(256MiB/17788msec); 0 zone resets 00:15:27.204 slat (usec): min=3, max=3437, avg= 6.74, stdev=23.18 00:15:27.204 clat (usec): min=328, max=83512, avg=10092.81, stdev=17131.90 00:15:27.204 lat (usec): min=336, max=83519, avg=10099.55, stdev=17132.24 00:15:27.204 clat percentiles (usec): 00:15:27.204 | 1.00th=[ 685], 5.00th=[ 930], 10.00th=[ 1156], 20.00th=[ 1401], 00:15:27.204 | 30.00th=[ 2278], 40.00th=[ 3589], 50.00th=[ 4555], 60.00th=[ 5276], 00:15:27.204 | 70.00th=[ 6194], 80.00th=[12518], 90.00th=[18744], 95.00th=[67634], 00:15:27.204 | 99.00th=[77071], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:15:27.204 | 99.99th=[82314] 00:15:27.204 bw ( KiB/s): min= 40, max=46488, per=78.57%, 
avg=20970.60, stdev=14552.11, samples=25 00:15:27.204 iops : min= 10, max=11622, avg=5242.52, stdev=3637.97, samples=25 00:15:27.204 lat (usec) : 500=0.03%, 750=1.08%, 1000=1.94% 00:15:27.204 lat (msec) : 2=11.30%, 4=7.89%, 10=16.23%, 20=8.13%, 50=47.70% 00:15:27.204 lat (msec) : 100=4.69%, 250=0.97%, 500=0.05% 00:15:27.204 cpu : usr=99.35%, sys=0.22%, ctx=41, majf=0, minf=5609 00:15:27.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.204 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:27.204 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:27.204 second_half: (groupid=0, jobs=1): err= 0: pid=70868: Mon Dec 16 20:08:32 2024 00:15:27.204 read: IOPS=2842, BW=11.1MiB/s (11.6MB/s)(255MiB/22958msec) 00:15:27.204 slat (nsec): min=3020, max=77643, avg=5315.29, stdev=1277.91 00:15:27.204 clat (usec): min=780, max=467576, avg=33925.01, stdev=20459.47 00:15:27.204 lat (usec): min=786, max=467582, avg=33930.33, stdev=20459.62 00:15:27.204 clat percentiles (msec): 00:15:27.204 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 30], 00:15:27.204 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:15:27.204 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 47], 00:15:27.204 | 99.00th=[ 140], 99.50th=[ 159], 99.90th=[ 234], 99.95th=[ 338], 00:15:27.204 | 99.99th=[ 464] 00:15:27.204 write: IOPS=3336, BW=13.0MiB/s (13.7MB/s)(256MiB/19643msec); 0 zone resets 00:15:27.204 slat (usec): min=3, max=1446, avg= 6.55, stdev= 6.71 00:15:27.204 clat (usec): min=335, max=83403, avg=11041.25, stdev=18081.74 00:15:27.204 lat (usec): min=345, max=83408, avg=11047.79, stdev=18081.79 00:15:27.204 clat percentiles (usec): 00:15:27.204 | 1.00th=[ 668], 5.00th=[ 832], 10.00th=[ 1045], 20.00th=[ 1319], 00:15:27.204 | 30.00th=[ 2114], 40.00th=[ 3064], 50.00th=[ 4113], 60.00th=[ 5211], 00:15:27.204 | 70.00th=[ 7046], 80.00th=[15139], 90.00th=[28443], 95.00th=[69731], 00:15:27.204 | 99.00th=[77071], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:15:27.204 | 99.99th=[83362] 00:15:27.204 bw ( KiB/s): min= 881, max=49664, per=85.40%, avg=22794.35, stdev=14098.16, samples=23 00:15:27.204 iops : min= 220, max=12416, avg=5698.52, stdev=3524.54, samples=23 00:15:27.204 lat (usec) : 500=0.03%, 750=1.38%, 1000=2.96% 00:15:27.204 lat (msec) : 2=10.27%, 4=10.05%, 10=13.40%, 20=7.76%, 50=48.54% 00:15:27.204 lat (msec) : 100=4.52%, 250=1.05%, 500=0.04% 00:15:27.204 cpu : usr=99.44%, sys=0.12%, ctx=261, majf=0, minf=5512 00:15:27.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:27.204 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:27.204 00:15:27.204 Run status group 0 (all jobs): 00:15:27.204 READ: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=510MiB (534MB), run=22777-22958msec 00:15:27.204 WRITE: bw=26.1MiB/s (27.3MB/s), 13.0MiB/s-14.4MiB/s (13.7MB/s-15.1MB/s), io=512MiB (537MB), run=17788-19643msec 00:15:27.204 ----------------------------------------------------- 00:15:27.204 Suppressions used: 00:15:27.204 count bytes template 00:15:27.204 2 
10 /usr/src/fio/parse.c 00:15:27.204 2 192 /usr/src/fio/iolog.c 00:15:27.204 1 8 libtcmalloc_minimal.so 00:15:27.204 1 904 libcrypto.so 00:15:27.204 ----------------------------------------------------- 00:15:27.204 00:15:27.204 20:08:33 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:27.204 20:08:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.204 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.204 20:08:33 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:27.204 20:08:33 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:27.204 20:08:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.204 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.204 20:08:33 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:27.204 20:08:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:27.204 20:08:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:27.204 20:08:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.204 20:08:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:27.204 20:08:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.204 20:08:33 -- common/autotest_common.sh@1330 -- # shift 00:15:27.204 20:08:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:27.204 20:08:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.204 20:08:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.204 20:08:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:27.204 20:08:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:27.204 20:08:33 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.204 20:08:33 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.204 20:08:33 -- common/autotest_common.sh@1336 -- # break 00:15:27.204 20:08:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.205 20:08:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:27.205 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:27.205 fio-3.35 00:15:27.205 Starting 1 thread 00:15:39.416 00:15:39.416 test: (groupid=0, jobs=1): err= 0: pid=71175: Mon Dec 16 20:08:46 2024 00:15:39.416 read: IOPS=8378, BW=32.7MiB/s (34.3MB/s)(255MiB/7782msec) 00:15:39.416 slat (nsec): min=2971, max=20120, avg=4720.82, stdev=1090.76 00:15:39.416 clat (usec): min=515, max=35610, avg=15267.61, stdev=2025.21 00:15:39.416 lat (usec): min=519, max=35613, avg=15272.33, stdev=2025.24 00:15:39.416 clat percentiles (usec): 00:15:39.416 | 1.00th=[13173], 5.00th=[13435], 10.00th=[13566], 20.00th=[13960], 00:15:39.416 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:15:39.416 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16057], 95.00th=[17957], 00:15:39.416 | 99.00th=[23987], 99.50th=[25560], 99.90th=[31851], 99.95th=[33817], 00:15:39.416 | 99.99th=[35390] 00:15:39.416 write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(256MiB/3763msec); 0 zone resets 
00:15:39.416 slat (usec): min=3, max=125, avg= 6.07, stdev= 2.18 00:15:39.416 clat (usec): min=420, max=49915, avg=7311.25, stdev=9374.80 00:15:39.416 lat (usec): min=425, max=49921, avg=7317.33, stdev=9374.81 00:15:39.416 clat percentiles (usec): 00:15:39.416 | 1.00th=[ 578], 5.00th=[ 701], 10.00th=[ 791], 20.00th=[ 914], 00:15:39.416 | 30.00th=[ 1057], 40.00th=[ 1434], 50.00th=[ 4686], 60.00th=[ 5473], 00:15:39.416 | 70.00th=[ 6456], 80.00th=[ 7832], 90.00th=[25297], 95.00th=[27395], 00:15:39.416 | 99.00th=[35390], 99.50th=[39584], 99.90th=[43254], 99.95th=[44827], 00:15:39.416 | 99.99th=[48497] 00:15:39.416 bw ( KiB/s): min=24296, max=90384, per=94.08%, avg=65536.00, stdev=20778.80, samples=8 00:15:39.416 iops : min= 6074, max=22596, avg=16384.00, stdev=5194.70, samples=8 00:15:39.416 lat (usec) : 500=0.04%, 750=3.63%, 1000=9.76% 00:15:39.416 lat (msec) : 2=7.20%, 4=1.36%, 10=20.05%, 20=48.12%, 50=9.84% 00:15:39.416 cpu : usr=99.32%, sys=0.21%, ctx=16, majf=0, minf=5567 00:15:39.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:39.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.416 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.416 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.416 00:15:39.416 Run status group 0 (all jobs): 00:15:39.416 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=255MiB (267MB), run=7782-7782msec 00:15:39.416 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=256MiB (268MB), run=3763-3763msec 00:15:40.792 ----------------------------------------------------- 00:15:40.793 Suppressions used: 00:15:40.793 count bytes template 00:15:40.793 1 5 /usr/src/fio/parse.c 00:15:40.793 2 192 /usr/src/fio/iolog.c 00:15:40.793 1 8 libtcmalloc_minimal.so 00:15:40.793 1 904 libcrypto.so 00:15:40.793 ----------------------------------------------------- 00:15:40.793 00:15:40.793 20:08:48 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:40.793 20:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.793 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.793 20:08:48 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:40.793 Remove shared memory files 00:15:40.793 20:08:48 -- ftl/fio.sh@85 -- # remove_shm 00:15:40.793 20:08:48 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:40.793 20:08:48 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:40.793 20:08:48 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:40.793 20:08:48 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56154 /dev/shm/spdk_tgt_trace.pid69464 00:15:40.793 20:08:48 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:40.793 20:08:48 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:40.793 ************************************ 00:15:40.793 END TEST ftl_fio_basic 00:15:40.793 ************************************ 00:15:40.793 00:15:40.793 real 0m59.854s 00:15:40.793 user 2m3.900s 00:15:40.793 sys 0m11.334s 00:15:40.793 20:08:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:40.793 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.793 20:08:48 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:40.793 20:08:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:40.793 20:08:48 
-- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.793 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.793 ************************************ 00:15:40.793 START TEST ftl_bdevperf 00:15:40.793 ************************************ 00:15:40.793 20:08:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:40.793 * Looking for test storage... 00:15:40.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.793 20:08:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:40.793 20:08:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:40.793 20:08:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:40.793 20:08:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:40.793 20:08:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:40.793 20:08:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:40.793 20:08:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:40.793 20:08:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:40.793 20:08:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:40.793 20:08:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.793 20:08:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:40.793 20:08:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:40.793 20:08:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:40.793 20:08:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:40.793 20:08:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:40.793 20:08:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:40.793 20:08:48 -- scripts/common.sh@344 -- # : 1 00:15:40.793 20:08:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:40.793 20:08:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.793 20:08:48 -- scripts/common.sh@364 -- # decimal 1 00:15:40.793 20:08:48 -- scripts/common.sh@352 -- # local d=1 00:15:40.793 20:08:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.793 20:08:48 -- scripts/common.sh@354 -- # echo 1 00:15:40.793 20:08:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:40.793 20:08:48 -- scripts/common.sh@365 -- # decimal 2 00:15:40.793 20:08:48 -- scripts/common.sh@352 -- # local d=2 00:15:40.793 20:08:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.793 20:08:48 -- scripts/common.sh@354 -- # echo 2 00:15:40.793 20:08:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:40.793 20:08:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:40.793 20:08:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:40.793 20:08:48 -- scripts/common.sh@367 -- # return 0 00:15:40.793 20:08:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.793 20:08:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:40.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.793 --rc genhtml_branch_coverage=1 00:15:40.793 --rc genhtml_function_coverage=1 00:15:40.793 --rc genhtml_legend=1 00:15:40.793 --rc geninfo_all_blocks=1 00:15:40.793 --rc geninfo_unexecuted_blocks=1 00:15:40.793 00:15:40.793 ' 00:15:40.793 20:08:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:40.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.793 --rc genhtml_branch_coverage=1 00:15:40.793 --rc genhtml_function_coverage=1 00:15:40.793 --rc genhtml_legend=1 00:15:40.793 --rc geninfo_all_blocks=1 00:15:40.793 --rc geninfo_unexecuted_blocks=1 00:15:40.793 00:15:40.793 ' 00:15:40.793 20:08:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:40.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.793 --rc genhtml_branch_coverage=1 00:15:40.793 --rc genhtml_function_coverage=1 00:15:40.793 --rc genhtml_legend=1 00:15:40.793 --rc geninfo_all_blocks=1 00:15:40.793 --rc geninfo_unexecuted_blocks=1 00:15:40.793 00:15:40.793 ' 00:15:40.793 20:08:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:40.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.793 --rc genhtml_branch_coverage=1 00:15:40.793 --rc genhtml_function_coverage=1 00:15:40.793 --rc genhtml_legend=1 00:15:40.793 --rc geninfo_all_blocks=1 00:15:40.793 --rc geninfo_unexecuted_blocks=1 00:15:40.793 00:15:40.793 ' 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:40.793 20:08:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:40.793 20:08:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.793 20:08:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.793 20:08:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
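The trace above is scripts/common.sh picking lcov flags: it splits both version strings on '.', '-' and ':' and compares them component by component, and because 1.15 < 2 it exports the lcov 1.x style --rc options. A minimal stand-alone sketch of that component-wise comparison, assuming an illustrative helper name (ver_lt is not the real function in scripts/common.sh):

    # Sketch only: returns 0 if version $1 is strictly lower than version $2.
    ver_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}     # missing components compare as 0
            (( x > y )) && return 1             # first differing component decides
            (( x < y )) && return 0
        done
        return 1                                # equal versions are not "less than"
    }

    # Same decision as the trace: lcov older than 2 gets the --rc option syntax.
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi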
00:15:40.793 20:08:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:40.793 20:08:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.793 20:08:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:40.793 20:08:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:40.793 20:08:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.793 20:08:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.793 20:08:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:40.793 20:08:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:40.793 20:08:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.793 20:08:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.793 20:08:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:40.793 20:08:48 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:40.793 20:08:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.793 20:08:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.793 20:08:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:40.793 20:08:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:40.793 20:08:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.793 20:08:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.793 20:08:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.793 20:08:48 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.793 20:08:48 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:40.793 20:08:48 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:40.793 20:08:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.793 20:08:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:40.793 20:08:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.793 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71392 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@22 -- # waitforlisten 71392 00:15:40.793 20:08:48 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:40.793 20:08:48 -- common/autotest_common.sh@829 -- # '[' -z 71392 ']' 00:15:40.793 20:08:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.793 20:08:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.793 20:08:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.793 20:08:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.793 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:15:41.052 [2024-12-16 20:08:48.447368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:41.052 [2024-12-16 20:08:48.447458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:15:41.052 [2024-12-16 20:08:48.589311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.310 [2024-12-16 20:08:48.758101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.877 20:08:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.877 20:08:49 -- common/autotest_common.sh@862 -- # return 0 00:15:41.877 20:08:49 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:41.877 20:08:49 -- ftl/common.sh@54 -- # local name=nvme0 00:15:41.877 20:08:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:41.877 20:08:49 -- ftl/common.sh@56 -- # local size=103424 00:15:41.877 20:08:49 -- ftl/common.sh@59 -- # local base_bdev 00:15:41.877 20:08:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:42.135 20:08:49 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:42.135 20:08:49 -- ftl/common.sh@62 -- # local base_size 00:15:42.135 20:08:49 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:42.135 20:08:49 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:42.135 20:08:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:42.135 20:08:49 -- common/autotest_common.sh@1369 -- # local bs 00:15:42.135 20:08:49 -- common/autotest_common.sh@1370 -- # local nb 00:15:42.135 20:08:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:42.135 20:08:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:42.135 { 00:15:42.135 "name": "nvme0n1", 00:15:42.135 "aliases": [ 00:15:42.135 "8f2dbcc8-6248-49a7-8569-c388901c903f" 00:15:42.135 ], 00:15:42.135 "product_name": "NVMe disk", 00:15:42.135 "block_size": 4096, 00:15:42.135 "num_blocks": 1310720, 00:15:42.135 "uuid": "8f2dbcc8-6248-49a7-8569-c388901c903f", 00:15:42.135 "assigned_rate_limits": { 00:15:42.135 "rw_ios_per_sec": 0, 00:15:42.135 "rw_mbytes_per_sec": 0, 00:15:42.135 "r_mbytes_per_sec": 0, 00:15:42.135 "w_mbytes_per_sec": 0 00:15:42.135 }, 00:15:42.135 "claimed": true, 00:15:42.135 "claim_type": "read_many_write_one", 00:15:42.135 "zoned": false, 00:15:42.135 "supported_io_types": { 00:15:42.135 "read": true, 00:15:42.135 "write": true, 00:15:42.135 "unmap": true, 00:15:42.135 "write_zeroes": true, 00:15:42.135 "flush": true, 00:15:42.135 "reset": true, 00:15:42.135 "compare": true, 00:15:42.135 "compare_and_write": false, 00:15:42.135 "abort": true, 00:15:42.135 "nvme_admin": true, 00:15:42.135 "nvme_io": true 00:15:42.135 }, 00:15:42.135 "driver_specific": { 00:15:42.135 "nvme": [ 00:15:42.135 { 00:15:42.135 "pci_address": "0000:00:07.0", 00:15:42.135 "trid": { 00:15:42.135 "trtype": "PCIe", 00:15:42.135 "traddr": "0000:00:07.0" 00:15:42.135 }, 00:15:42.135 "ctrlr_data": { 00:15:42.135 "cntlid": 0, 
00:15:42.135 "vendor_id": "0x1b36", 00:15:42.135 "model_number": "QEMU NVMe Ctrl", 00:15:42.135 "serial_number": "12341", 00:15:42.135 "firmware_revision": "8.0.0", 00:15:42.135 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:42.135 "oacs": { 00:15:42.135 "security": 0, 00:15:42.135 "format": 1, 00:15:42.135 "firmware": 0, 00:15:42.135 "ns_manage": 1 00:15:42.135 }, 00:15:42.135 "multi_ctrlr": false, 00:15:42.135 "ana_reporting": false 00:15:42.135 }, 00:15:42.135 "vs": { 00:15:42.135 "nvme_version": "1.4" 00:15:42.135 }, 00:15:42.135 "ns_data": { 00:15:42.135 "id": 1, 00:15:42.135 "can_share": false 00:15:42.135 } 00:15:42.135 } 00:15:42.135 ], 00:15:42.135 "mp_policy": "active_passive" 00:15:42.135 } 00:15:42.135 } 00:15:42.135 ]' 00:15:42.135 20:08:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:42.135 20:08:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:42.135 20:08:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:42.394 20:08:49 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:42.394 20:08:49 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:42.394 20:08:49 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:42.394 20:08:49 -- ftl/common.sh@63 -- # base_size=5120 00:15:42.394 20:08:49 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:42.394 20:08:49 -- ftl/common.sh@67 -- # clear_lvols 00:15:42.394 20:08:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:42.394 20:08:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:42.394 20:08:49 -- ftl/common.sh@28 -- # stores=ac6b55e8-1535-4ce2-91fb-776ea3f7f627 00:15:42.394 20:08:49 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:42.394 20:08:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac6b55e8-1535-4ce2-91fb-776ea3f7f627 00:15:42.652 20:08:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:42.909 20:08:50 -- ftl/common.sh@68 -- # lvs=170f1612-941a-45ed-b6c8-6ad513631c6b 00:15:42.909 20:08:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 170f1612-941a-45ed-b6c8-6ad513631c6b 00:15:43.167 20:08:50 -- ftl/bdevperf.sh@23 -- # split_bdev=56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.167 20:08:50 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.167 20:08:50 -- ftl/common.sh@35 -- # local name=nvc0 00:15:43.167 20:08:50 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:43.167 20:08:50 -- ftl/common.sh@37 -- # local base_bdev=56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.167 20:08:50 -- ftl/common.sh@38 -- # local cache_size= 00:15:43.167 20:08:50 -- ftl/common.sh@41 -- # get_bdev_size 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.167 20:08:50 -- common/autotest_common.sh@1367 -- # local bdev_name=56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.167 20:08:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:43.167 20:08:50 -- common/autotest_common.sh@1369 -- # local bs 00:15:43.167 20:08:50 -- common/autotest_common.sh@1370 -- # local nb 00:15:43.167 20:08:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.167 20:08:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:43.167 { 00:15:43.167 "name": "56dcbf40-622a-4ff2-b251-171db13742ce", 00:15:43.167 "aliases": [ 
00:15:43.167 "lvs/nvme0n1p0" 00:15:43.167 ], 00:15:43.167 "product_name": "Logical Volume", 00:15:43.167 "block_size": 4096, 00:15:43.167 "num_blocks": 26476544, 00:15:43.167 "uuid": "56dcbf40-622a-4ff2-b251-171db13742ce", 00:15:43.167 "assigned_rate_limits": { 00:15:43.167 "rw_ios_per_sec": 0, 00:15:43.167 "rw_mbytes_per_sec": 0, 00:15:43.167 "r_mbytes_per_sec": 0, 00:15:43.167 "w_mbytes_per_sec": 0 00:15:43.167 }, 00:15:43.167 "claimed": false, 00:15:43.167 "zoned": false, 00:15:43.167 "supported_io_types": { 00:15:43.167 "read": true, 00:15:43.167 "write": true, 00:15:43.167 "unmap": true, 00:15:43.167 "write_zeroes": true, 00:15:43.167 "flush": false, 00:15:43.167 "reset": true, 00:15:43.167 "compare": false, 00:15:43.167 "compare_and_write": false, 00:15:43.167 "abort": false, 00:15:43.167 "nvme_admin": false, 00:15:43.167 "nvme_io": false 00:15:43.167 }, 00:15:43.167 "driver_specific": { 00:15:43.167 "lvol": { 00:15:43.167 "lvol_store_uuid": "170f1612-941a-45ed-b6c8-6ad513631c6b", 00:15:43.167 "base_bdev": "nvme0n1", 00:15:43.167 "thin_provision": true, 00:15:43.167 "snapshot": false, 00:15:43.167 "clone": false, 00:15:43.167 "esnap_clone": false 00:15:43.167 } 00:15:43.167 } 00:15:43.167 } 00:15:43.167 ]' 00:15:43.167 20:08:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:43.424 20:08:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:43.424 20:08:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:43.424 20:08:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:43.424 20:08:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:43.424 20:08:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:43.424 20:08:50 -- ftl/common.sh@41 -- # local base_size=5171 00:15:43.424 20:08:50 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:43.424 20:08:50 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:43.682 20:08:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:43.682 20:08:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:43.682 20:08:51 -- ftl/common.sh@48 -- # get_bdev_size 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.682 20:08:51 -- common/autotest_common.sh@1367 -- # local bdev_name=56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.682 20:08:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:43.682 20:08:51 -- common/autotest_common.sh@1369 -- # local bs 00:15:43.682 20:08:51 -- common/autotest_common.sh@1370 -- # local nb 00:15:43.682 20:08:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.682 20:08:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:43.682 { 00:15:43.682 "name": "56dcbf40-622a-4ff2-b251-171db13742ce", 00:15:43.682 "aliases": [ 00:15:43.682 "lvs/nvme0n1p0" 00:15:43.682 ], 00:15:43.682 "product_name": "Logical Volume", 00:15:43.682 "block_size": 4096, 00:15:43.682 "num_blocks": 26476544, 00:15:43.682 "uuid": "56dcbf40-622a-4ff2-b251-171db13742ce", 00:15:43.682 "assigned_rate_limits": { 00:15:43.682 "rw_ios_per_sec": 0, 00:15:43.682 "rw_mbytes_per_sec": 0, 00:15:43.682 "r_mbytes_per_sec": 0, 00:15:43.682 "w_mbytes_per_sec": 0 00:15:43.682 }, 00:15:43.682 "claimed": false, 00:15:43.682 "zoned": false, 00:15:43.682 "supported_io_types": { 00:15:43.682 "read": true, 00:15:43.682 "write": true, 00:15:43.682 "unmap": true, 00:15:43.682 "write_zeroes": true, 00:15:43.682 "flush": false, 00:15:43.682 "reset": true, 
00:15:43.682 "compare": false, 00:15:43.682 "compare_and_write": false, 00:15:43.682 "abort": false, 00:15:43.682 "nvme_admin": false, 00:15:43.682 "nvme_io": false 00:15:43.682 }, 00:15:43.682 "driver_specific": { 00:15:43.682 "lvol": { 00:15:43.682 "lvol_store_uuid": "170f1612-941a-45ed-b6c8-6ad513631c6b", 00:15:43.682 "base_bdev": "nvme0n1", 00:15:43.682 "thin_provision": true, 00:15:43.682 "snapshot": false, 00:15:43.682 "clone": false, 00:15:43.682 "esnap_clone": false 00:15:43.682 } 00:15:43.682 } 00:15:43.682 } 00:15:43.682 ]' 00:15:43.682 20:08:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:43.682 20:08:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:43.682 20:08:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:43.940 20:08:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:43.940 20:08:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:43.940 20:08:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:43.940 20:08:51 -- ftl/common.sh@48 -- # cache_size=5171 00:15:43.940 20:08:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:43.940 20:08:51 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:43.940 20:08:51 -- ftl/bdevperf.sh@26 -- # get_bdev_size 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.940 20:08:51 -- common/autotest_common.sh@1367 -- # local bdev_name=56dcbf40-622a-4ff2-b251-171db13742ce 00:15:43.940 20:08:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:43.940 20:08:51 -- common/autotest_common.sh@1369 -- # local bs 00:15:43.940 20:08:51 -- common/autotest_common.sh@1370 -- # local nb 00:15:43.940 20:08:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56dcbf40-622a-4ff2-b251-171db13742ce 00:15:44.197 20:08:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:44.197 { 00:15:44.197 "name": "56dcbf40-622a-4ff2-b251-171db13742ce", 00:15:44.197 "aliases": [ 00:15:44.197 "lvs/nvme0n1p0" 00:15:44.197 ], 00:15:44.197 "product_name": "Logical Volume", 00:15:44.198 "block_size": 4096, 00:15:44.198 "num_blocks": 26476544, 00:15:44.198 "uuid": "56dcbf40-622a-4ff2-b251-171db13742ce", 00:15:44.198 "assigned_rate_limits": { 00:15:44.198 "rw_ios_per_sec": 0, 00:15:44.198 "rw_mbytes_per_sec": 0, 00:15:44.198 "r_mbytes_per_sec": 0, 00:15:44.198 "w_mbytes_per_sec": 0 00:15:44.198 }, 00:15:44.198 "claimed": false, 00:15:44.198 "zoned": false, 00:15:44.198 "supported_io_types": { 00:15:44.198 "read": true, 00:15:44.198 "write": true, 00:15:44.198 "unmap": true, 00:15:44.198 "write_zeroes": true, 00:15:44.198 "flush": false, 00:15:44.198 "reset": true, 00:15:44.198 "compare": false, 00:15:44.198 "compare_and_write": false, 00:15:44.198 "abort": false, 00:15:44.198 "nvme_admin": false, 00:15:44.198 "nvme_io": false 00:15:44.198 }, 00:15:44.198 "driver_specific": { 00:15:44.198 "lvol": { 00:15:44.198 "lvol_store_uuid": "170f1612-941a-45ed-b6c8-6ad513631c6b", 00:15:44.198 "base_bdev": "nvme0n1", 00:15:44.198 "thin_provision": true, 00:15:44.198 "snapshot": false, 00:15:44.198 "clone": false, 00:15:44.198 "esnap_clone": false 00:15:44.198 } 00:15:44.198 } 00:15:44.198 } 00:15:44.198 ]' 00:15:44.198 20:08:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:44.198 20:08:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:44.198 20:08:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:44.198 20:08:51 -- common/autotest_common.sh@1373 -- # 
nb=26476544 00:15:44.198 20:08:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:44.198 20:08:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:44.198 20:08:51 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:44.198 20:08:51 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 56dcbf40-622a-4ff2-b251-171db13742ce -c nvc0n1p0 --l2p_dram_limit 20 00:15:44.457 [2024-12-16 20:08:51.954778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.954829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:44.457 [2024-12-16 20:08:51.954843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:44.457 [2024-12-16 20:08:51.954851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.954892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.954899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:44.457 [2024-12-16 20:08:51.954908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:44.457 [2024-12-16 20:08:51.954914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.954929] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:44.457 [2024-12-16 20:08:51.955534] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:44.457 [2024-12-16 20:08:51.955553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.955560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:44.457 [2024-12-16 20:08:51.955569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:15:44.457 [2024-12-16 20:08:51.955575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.955597] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 950480fd-358d-4f29-a7d5-599d64680e6a 00:15:44.457 [2024-12-16 20:08:51.956867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.956894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:44.457 [2024-12-16 20:08:51.956902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:15:44.457 [2024-12-16 20:08:51.956911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.963721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.963752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:44.457 [2024-12-16 20:08:51.963760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.772 ms 00:15:44.457 [2024-12-16 20:08:51.963768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.963869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.963878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:44.457 [2024-12-16 20:08:51.963886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:44.457 [2024-12-16 20:08:51.963897] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.963931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.963940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:44.457 [2024-12-16 20:08:51.963948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:44.457 [2024-12-16 20:08:51.963955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.963973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:44.457 [2024-12-16 20:08:51.967323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.967346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:44.457 [2024-12-16 20:08:51.967370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.353 ms 00:15:44.457 [2024-12-16 20:08:51.967377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.967405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.967411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:44.457 [2024-12-16 20:08:51.967419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:44.457 [2024-12-16 20:08:51.967424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.457 [2024-12-16 20:08:51.967443] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:44.457 [2024-12-16 20:08:51.967540] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:44.457 [2024-12-16 20:08:51.967555] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:44.457 [2024-12-16 20:08:51.967564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:44.457 [2024-12-16 20:08:51.967574] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:44.457 [2024-12-16 20:08:51.967582] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:44.457 [2024-12-16 20:08:51.967589] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:44.457 [2024-12-16 20:08:51.967596] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:44.457 [2024-12-16 20:08:51.967606] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:44.457 [2024-12-16 20:08:51.967612] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:44.457 [2024-12-16 20:08:51.967619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.457 [2024-12-16 20:08:51.967625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:44.458 [2024-12-16 20:08:51.967634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:15:44.458 [2024-12-16 20:08:51.967639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:51.967686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:51.967693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:15:44.458 [2024-12-16 20:08:51.967701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:44.458 [2024-12-16 20:08:51.967706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:51.967763] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:44.458 [2024-12-16 20:08:51.967771] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:44.458 [2024-12-16 20:08:51.967778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:44.458 [2024-12-16 20:08:51.967803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:44.458 [2024-12-16 20:08:51.967822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:44.458 [2024-12-16 20:08:51.967834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:44.458 [2024-12-16 20:08:51.967840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:44.458 [2024-12-16 20:08:51.967848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:44.458 [2024-12-16 20:08:51.967854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:44.458 [2024-12-16 20:08:51.967860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:44.458 [2024-12-16 20:08:51.967865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:44.458 [2024-12-16 20:08:51.967879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:44.458 [2024-12-16 20:08:51.967886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:44.458 [2024-12-16 20:08:51.967902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:44.458 [2024-12-16 20:08:51.967907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:44.458 [2024-12-16 20:08:51.967919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:44.458 [2024-12-16 20:08:51.967938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:44.458 [2024-12-16 20:08:51.967954] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:44.458 [2024-12-16 20:08:51.967974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.458 [2024-12-16 20:08:51.967986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:44.458 [2024-12-16 20:08:51.967991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:44.458 [2024-12-16 20:08:51.967999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:44.458 [2024-12-16 20:08:51.968004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:44.458 [2024-12-16 20:08:51.968011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:44.458 [2024-12-16 20:08:51.968015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:44.458 [2024-12-16 20:08:51.968021] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:44.458 [2024-12-16 20:08:51.968026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:44.458 [2024-12-16 20:08:51.968033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:44.458 [2024-12-16 20:08:51.968039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.458 [2024-12-16 20:08:51.968047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:44.458 [2024-12-16 20:08:51.968052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:44.458 [2024-12-16 20:08:51.968058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:44.458 [2024-12-16 20:08:51.968064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:44.458 [2024-12-16 20:08:51.968072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:44.458 [2024-12-16 20:08:51.968077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:44.458 [2024-12-16 20:08:51.968084] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:44.458 [2024-12-16 20:08:51.968092] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:44.458 [2024-12-16 20:08:51.968103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:44.458 [2024-12-16 20:08:51.968110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:44.458 [2024-12-16 20:08:51.968116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:44.458 [2024-12-16 20:08:51.968123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:44.458 [2024-12-16 20:08:51.968129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:44.458 [2024-12-16 20:08:51.968135] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:44.458 [2024-12-16 20:08:51.968142] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:44.458 [2024-12-16 20:08:51.968148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:44.458 [2024-12-16 20:08:51.968155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:44.458 [2024-12-16 20:08:51.968160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:44.458 [2024-12-16 20:08:51.968168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:44.458 [2024-12-16 20:08:51.968174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:44.458 [2024-12-16 20:08:51.968182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:44.458 [2024-12-16 20:08:51.968187] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:44.458 [2024-12-16 20:08:51.968196] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:44.458 [2024-12-16 20:08:51.968203] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:44.458 [2024-12-16 20:08:51.968210] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:44.458 [2024-12-16 20:08:51.968217] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:44.458 [2024-12-16 20:08:51.968224] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:44.458 [2024-12-16 20:08:51.968230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:51.968237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:44.458 [2024-12-16 20:08:51.968243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:15:44.458 [2024-12-16 20:08:51.968251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:51.982415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:51.982448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:44.458 [2024-12-16 20:08:51.982457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.758 ms 00:15:44.458 [2024-12-16 20:08:51.982465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:51.982533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:51.982543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:44.458 [2024-12-16 20:08:51.982550] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:44.458 [2024-12-16 20:08:51.982559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:52.019399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:52.019545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:44.458 [2024-12-16 20:08:52.019562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.805 ms 00:15:44.458 [2024-12-16 20:08:52.019571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:52.019600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:52.019611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:44.458 [2024-12-16 20:08:52.019618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:44.458 [2024-12-16 20:08:52.019625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:52.020035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:52.020052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:44.458 [2024-12-16 20:08:52.020060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:15:44.458 [2024-12-16 20:08:52.020068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.458 [2024-12-16 20:08:52.020160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.458 [2024-12-16 20:08:52.020172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:44.459 [2024-12-16 20:08:52.020181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:15:44.459 [2024-12-16 20:08:52.020189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.459 [2024-12-16 20:08:52.032787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.459 [2024-12-16 20:08:52.032814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:44.459 [2024-12-16 20:08:52.032824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.585 ms 00:15:44.459 [2024-12-16 20:08:52.032832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.459 [2024-12-16 20:08:52.042935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:44.459 [2024-12-16 20:08:52.048425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.459 [2024-12-16 20:08:52.048450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:44.459 [2024-12-16 20:08:52.048460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.532 ms 00:15:44.459 [2024-12-16 20:08:52.048466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.717 [2024-12-16 20:08:52.128770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.717 [2024-12-16 20:08:52.128807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:44.717 [2024-12-16 20:08:52.128818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.280 ms 00:15:44.717 [2024-12-16 20:08:52.128825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.717 [2024-12-16 20:08:52.128856] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:44.717 [2024-12-16 20:08:52.128865] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:47.998 [2024-12-16 20:08:55.405936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.406005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:47.998 [2024-12-16 20:08:55.406021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3277.059 ms 00:15:47.998 [2024-12-16 20:08:55.406029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.406205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.406215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:47.998 [2024-12-16 20:08:55.406225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:15:47.998 [2024-12-16 20:08:55.406231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.425344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.425375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:47.998 [2024-12-16 20:08:55.425387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.076 ms 00:15:47.998 [2024-12-16 20:08:55.425396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.443157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.443184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:47.998 [2024-12-16 20:08:55.443196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.729 ms 00:15:47.998 [2024-12-16 20:08:55.443203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.443482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.443493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:47.998 [2024-12-16 20:08:55.443502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:15:47.998 [2024-12-16 20:08:55.443509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.496921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.497078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:47.998 [2024-12-16 20:08:55.497096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.380 ms 00:15:47.998 [2024-12-16 20:08:55.497103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.517280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.517323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:47.998 [2024-12-16 20:08:55.517335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.146 ms 00:15:47.998 [2024-12-16 20:08:55.517341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.518448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 
20:08:55.518473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:47.998 [2024-12-16 20:08:55.518485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:15:47.998 [2024-12-16 20:08:55.518493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.537685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.537714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:47.998 [2024-12-16 20:08:55.537725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:15:47.998 [2024-12-16 20:08:55.537731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.998 [2024-12-16 20:08:55.537763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.998 [2024-12-16 20:08:55.537771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:47.998 [2024-12-16 20:08:55.537781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:47.998 [2024-12-16 20:08:55.537787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.999 [2024-12-16 20:08:55.537856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.999 [2024-12-16 20:08:55.537864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:47.999 [2024-12-16 20:08:55.537873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:47.999 [2024-12-16 20:08:55.537879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.999 [2024-12-16 20:08:55.538821] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3583.660 ms, result 0 00:15:47.999 { 00:15:47.999 "name": "ftl0", 00:15:47.999 "uuid": "950480fd-358d-4f29-a7d5-599d64680e6a" 00:15:47.999 } 00:15:47.999 20:08:55 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:47.999 20:08:55 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:47.999 20:08:55 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:48.256 20:08:55 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:48.256 [2024-12-16 20:08:55.834824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:48.256 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:48.256 Zero copy mechanism will not be used. 00:15:48.256 Running I/O for 4 seconds... 
00:15:52.444 00:15:52.444 Latency(us) 00:15:52.444 [2024-12-16T20:09:00.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.444 [2024-12-16T20:09:00.084Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:52.444 ftl0 : 4.00 756.57 50.24 0.00 0.00 1396.27 392.27 3604.48 00:15:52.444 [2024-12-16T20:09:00.084Z] =================================================================================================================== 00:15:52.444 [2024-12-16T20:09:00.084Z] Total : 756.57 50.24 0.00 0.00 1396.27 392.27 3604.48 00:15:52.444 [2024-12-16 20:08:59.841899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:52.444 0 00:15:52.444 20:08:59 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:52.444 [2024-12-16 20:08:59.942560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:52.444 Running I/O for 4 seconds... 00:15:56.825 00:15:56.825 Latency(us) 00:15:56.825 [2024-12-16T20:09:04.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.825 [2024-12-16T20:09:04.465Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.825 ftl0 : 4.03 4893.03 19.11 0.00 0.00 26048.91 415.90 47185.92 00:15:56.825 [2024-12-16T20:09:04.465Z] =================================================================================================================== 00:15:56.825 [2024-12-16T20:09:04.465Z] Total : 4893.03 19.11 0.00 0.00 26048.91 0.00 47185.92 00:15:56.825 0 00:15:56.825 [2024-12-16 20:09:03.984228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:56.825 20:09:03 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:56.825 [2024-12-16 20:09:04.102441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:56.825 Running I/O for 4 seconds... 
00:16:01.035 00:16:01.035 Latency(us) 00:16:01.035 [2024-12-16T20:09:08.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.035 [2024-12-16T20:09:08.675Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.035 Verification LBA range: start 0x0 length 0x1400000 00:16:01.035 ftl0 : 4.01 7988.72 31.21 0.00 0.00 15982.97 215.83 30852.33 00:16:01.035 [2024-12-16T20:09:08.675Z] =================================================================================================================== 00:16:01.035 [2024-12-16T20:09:08.675Z] Total : 7988.72 31.21 0.00 0.00 15982.97 0.00 30852.33 00:16:01.035 0 00:16:01.035 [2024-12-16 20:09:08.128512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:01.035 20:09:08 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:01.035 [2024-12-16 20:09:08.335887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.035 [2024-12-16 20:09:08.336132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:01.035 [2024-12-16 20:09:08.336163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:01.035 [2024-12-16 20:09:08.336173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.035 [2024-12-16 20:09:08.336207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:01.035 [2024-12-16 20:09:08.339208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.035 [2024-12-16 20:09:08.339441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:01.035 [2024-12-16 20:09:08.339466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:16:01.035 [2024-12-16 20:09:08.339482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.035 [2024-12-16 20:09:08.342641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.035 [2024-12-16 20:09:08.342817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:01.035 [2024-12-16 20:09:08.342838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:16:01.036 [2024-12-16 20:09:08.342849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.036 [2024-12-16 20:09:08.573530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.036 [2024-12-16 20:09:08.573748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:01.036 [2024-12-16 20:09:08.573776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 230.656 ms 00:16:01.036 [2024-12-16 20:09:08.573787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.036 [2024-12-16 20:09:08.579961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.036 [2024-12-16 20:09:08.580016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:01.036 [2024-12-16 20:09:08.580030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:16:01.036 [2024-12-16 20:09:08.580040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.036 [2024-12-16 20:09:08.607850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.036 [2024-12-16 20:09:08.608050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:16:01.036 [2024-12-16 20:09:08.608071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.728 ms 00:16:01.036 [2024-12-16 20:09:08.608086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.036 [2024-12-16 20:09:08.626587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.036 [2024-12-16 20:09:08.626799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:01.036 [2024-12-16 20:09:08.626824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.431 ms 00:16:01.036 [2024-12-16 20:09:08.626834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.036 [2024-12-16 20:09:08.626995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.036 [2024-12-16 20:09:08.627010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:01.036 [2024-12-16 20:09:08.627019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:01.036 [2024-12-16 20:09:08.627029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.036 [2024-12-16 20:09:08.654069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.036 [2024-12-16 20:09:08.654258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:01.036 [2024-12-16 20:09:08.654280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.023 ms 00:16:01.036 [2024-12-16 20:09:08.654290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.299 [2024-12-16 20:09:08.681192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.299 [2024-12-16 20:09:08.681461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:01.299 [2024-12-16 20:09:08.681491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.588 ms 00:16:01.299 [2024-12-16 20:09:08.681506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.299 [2024-12-16 20:09:08.707424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.299 [2024-12-16 20:09:08.707480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:01.299 [2024-12-16 20:09:08.707493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.845 ms 00:16:01.299 [2024-12-16 20:09:08.707501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.299 [2024-12-16 20:09:08.733349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.299 [2024-12-16 20:09:08.733401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:01.299 [2024-12-16 20:09:08.733413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.743 ms 00:16:01.299 [2024-12-16 20:09:08.733421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.299 [2024-12-16 20:09:08.733469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:01.299 [2024-12-16 20:09:08.733488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733516] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 20:09:08.733731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:01.299 [2024-12-16 
20:09:08.733742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:16:01.300 [2024-12-16 20:09:08.733967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.733995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:01.300 [2024-12-16 20:09:08.734415] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:01.300 [2024-12-16 20:09:08.734424] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 950480fd-358d-4f29-a7d5-599d64680e6a 00:16:01.300 [2024-12-16 20:09:08.734437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:01.300 
[2024-12-16 20:09:08.734451] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:01.300 [2024-12-16 20:09:08.734461] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:01.300 [2024-12-16 20:09:08.734469] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:01.300 [2024-12-16 20:09:08.734479] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:01.300 [2024-12-16 20:09:08.734489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:01.300 [2024-12-16 20:09:08.734499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:01.300 [2024-12-16 20:09:08.734505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:01.300 [2024-12-16 20:09:08.734513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:01.300 [2024-12-16 20:09:08.734520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.300 [2024-12-16 20:09:08.734532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:01.300 [2024-12-16 20:09:08.734541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:16:01.300 [2024-12-16 20:09:08.734550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.300 [2024-12-16 20:09:08.748756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.300 [2024-12-16 20:09:08.748804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:01.300 [2024-12-16 20:09:08.748816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.152 ms 00:16:01.300 [2024-12-16 20:09:08.748831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.300 [2024-12-16 20:09:08.749049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.300 [2024-12-16 20:09:08.749061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:01.300 [2024-12-16 20:09:08.749070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:01.300 [2024-12-16 20:09:08.749080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.300 [2024-12-16 20:09:08.790973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.300 [2024-12-16 20:09:08.791025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:01.300 [2024-12-16 20:09:08.791040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.791049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.791122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.791134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:01.301 [2024-12-16 20:09:08.791142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.791152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.791234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.791248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:01.301 [2024-12-16 20:09:08.791256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.791271] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.791288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.791318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:01.301 [2024-12-16 20:09:08.791326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.791336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.873460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.873517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:01.301 [2024-12-16 20:09:08.873531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.873544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.906490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.906543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:01.301 [2024-12-16 20:09:08.906555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.906565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.906636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.906648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.301 [2024-12-16 20:09:08.906656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.906670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.906716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.906729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.301 [2024-12-16 20:09:08.906737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.906747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.906854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.906866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.301 [2024-12-16 20:09:08.906875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.906885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.906921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.906935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:01.301 [2024-12-16 20:09:08.906943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.906953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.906999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.907010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.301 [2024-12-16 20:09:08.907018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:01.301 [2024-12-16 20:09:08.907030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.907083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:01.301 [2024-12-16 20:09:08.907095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.301 [2024-12-16 20:09:08.907103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:01.301 [2024-12-16 20:09:08.907112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.301 [2024-12-16 20:09:08.907261] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.328 ms, result 0 00:16:01.301 true 00:16:01.301 20:09:08 -- ftl/bdevperf.sh@37 -- # killprocess 71392 00:16:01.301 20:09:08 -- common/autotest_common.sh@936 -- # '[' -z 71392 ']' 00:16:01.301 20:09:08 -- common/autotest_common.sh@940 -- # kill -0 71392 00:16:01.301 20:09:08 -- common/autotest_common.sh@941 -- # uname 00:16:01.561 20:09:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.561 20:09:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71392 00:16:01.561 20:09:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:01.561 killing process with pid 71392 00:16:01.561 20:09:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:01.561 20:09:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71392' 00:16:01.561 Received shutdown signal, test time was about 4.000000 seconds 00:16:01.561 00:16:01.561 Latency(us) 00:16:01.561 [2024-12-16T20:09:09.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.561 [2024-12-16T20:09:09.201Z] =================================================================================================================== 00:16:01.561 [2024-12-16T20:09:09.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.561 20:09:08 -- common/autotest_common.sh@955 -- # kill 71392 00:16:01.561 20:09:08 -- common/autotest_common.sh@960 -- # wait 71392 00:16:02.505 20:09:09 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:02.505 20:09:09 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:02.505 20:09:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:02.505 20:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:02.505 Remove shared memory files 00:16:02.505 20:09:09 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:02.505 20:09:09 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:02.505 20:09:09 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:02.505 20:09:09 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:02.505 20:09:09 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:02.505 20:09:09 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:02.505 20:09:09 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:02.505 ************************************ 00:16:02.505 END TEST ftl_bdevperf 00:16:02.505 ************************************ 00:16:02.505 00:16:02.505 real 0m21.660s 00:16:02.505 user 0m24.006s 00:16:02.505 sys 0m0.922s 00:16:02.505 20:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:02.505 20:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:02.505 20:09:09 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:02.505 20:09:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
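The ftl_bdevperf pass that just finished reduces to a short command sequence. The sketch below is reconstructed from the xtrace output above, assuming the same repository layout under /home/vagrant/spdk_repo/spdk and an FTL bdev named ftl0 already created on bdevperf's RPC socket; the exact plumbing inside ftl/bdevperf.sh may differ.

  SPDK=/home/vagrant/spdk_repo/spdk
  # bdevperf is started in wait-for-RPC mode (-z) against the ftl0 bdev, per the timing_exit line above
  "$SPDK/build/examples/bdevperf" -z -T ftl0 &
  # confirm the FTL bdev is visible before driving I/O (bdevperf.sh@29)
  "$SPDK/scripts/rpc.py" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # the three timed workloads traced above: 68KiB qd1 writes, 4KiB qd128 writes, 4KiB qd128 verify
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 1   -w randwrite -t 4 -o 69632
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 128 -w randwrite -t 4 -o 4096
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 128 -w verify    -t 4 -o 4096
  # deleting the bdev drives the 'FTL shutdown' management sequence (Persist L2P, band dump, rollback steps)
  "$SPDK/scripts/rpc.py" bdev_ftl_delete -b ftl0

Each perform_tests call prints its IOPS/latency table, as seen in the log, once the 4-second run completes.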
00:16:02.505 20:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.505 20:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:02.505 ************************************ 00:16:02.505 START TEST ftl_trim 00:16:02.505 ************************************ 00:16:02.505 20:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:02.505 * Looking for test storage... 00:16:02.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.505 20:09:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:02.505 20:09:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:02.505 20:09:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:02.505 20:09:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:02.505 20:09:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:02.505 20:09:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:02.505 20:09:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:02.505 20:09:10 -- scripts/common.sh@335 -- # IFS=.-: 00:16:02.505 20:09:10 -- scripts/common.sh@335 -- # read -ra ver1 00:16:02.505 20:09:10 -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.505 20:09:10 -- scripts/common.sh@336 -- # read -ra ver2 00:16:02.505 20:09:10 -- scripts/common.sh@337 -- # local 'op=<' 00:16:02.505 20:09:10 -- scripts/common.sh@339 -- # ver1_l=2 00:16:02.505 20:09:10 -- scripts/common.sh@340 -- # ver2_l=1 00:16:02.505 20:09:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:02.505 20:09:10 -- scripts/common.sh@343 -- # case "$op" in 00:16:02.505 20:09:10 -- scripts/common.sh@344 -- # : 1 00:16:02.505 20:09:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:02.505 20:09:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.505 20:09:10 -- scripts/common.sh@364 -- # decimal 1 00:16:02.505 20:09:10 -- scripts/common.sh@352 -- # local d=1 00:16:02.505 20:09:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.505 20:09:10 -- scripts/common.sh@354 -- # echo 1 00:16:02.505 20:09:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:02.505 20:09:10 -- scripts/common.sh@365 -- # decimal 2 00:16:02.505 20:09:10 -- scripts/common.sh@352 -- # local d=2 00:16:02.505 20:09:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.505 20:09:10 -- scripts/common.sh@354 -- # echo 2 00:16:02.505 20:09:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:02.505 20:09:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:02.505 20:09:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:02.505 20:09:10 -- scripts/common.sh@367 -- # return 0 00:16:02.505 20:09:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.505 20:09:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:02.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.505 --rc genhtml_branch_coverage=1 00:16:02.505 --rc genhtml_function_coverage=1 00:16:02.505 --rc genhtml_legend=1 00:16:02.505 --rc geninfo_all_blocks=1 00:16:02.505 --rc geninfo_unexecuted_blocks=1 00:16:02.505 00:16:02.505 ' 00:16:02.505 20:09:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:02.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.505 --rc genhtml_branch_coverage=1 00:16:02.505 --rc genhtml_function_coverage=1 00:16:02.505 --rc genhtml_legend=1 00:16:02.505 --rc geninfo_all_blocks=1 00:16:02.505 --rc geninfo_unexecuted_blocks=1 00:16:02.505 00:16:02.505 ' 00:16:02.505 20:09:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:02.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.505 --rc genhtml_branch_coverage=1 00:16:02.505 --rc genhtml_function_coverage=1 00:16:02.505 --rc genhtml_legend=1 00:16:02.505 --rc geninfo_all_blocks=1 00:16:02.505 --rc geninfo_unexecuted_blocks=1 00:16:02.505 00:16:02.505 ' 00:16:02.505 20:09:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:02.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.505 --rc genhtml_branch_coverage=1 00:16:02.505 --rc genhtml_function_coverage=1 00:16:02.505 --rc genhtml_legend=1 00:16:02.505 --rc geninfo_all_blocks=1 00:16:02.505 --rc geninfo_unexecuted_blocks=1 00:16:02.505 00:16:02.505 ' 00:16:02.505 20:09:10 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:02.505 20:09:10 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:02.505 20:09:10 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.505 20:09:10 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.505 20:09:10 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:02.505 20:09:10 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:02.505 20:09:10 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.505 20:09:10 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:02.505 20:09:10 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:02.505 20:09:10 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.505 20:09:10 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.505 20:09:10 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:02.505 20:09:10 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:02.505 20:09:10 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.505 20:09:10 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.505 20:09:10 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:02.505 20:09:10 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:02.505 20:09:10 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.505 20:09:10 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.505 20:09:10 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:02.505 20:09:10 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:02.505 20:09:10 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.505 20:09:10 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.505 20:09:10 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.505 20:09:10 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.505 20:09:10 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:02.505 20:09:10 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:02.506 20:09:10 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.506 20:09:10 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.506 20:09:10 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.506 20:09:10 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:02.506 20:09:10 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:02.506 20:09:10 -- ftl/trim.sh@25 -- # timeout=240 00:16:02.506 20:09:10 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:02.506 20:09:10 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:02.506 20:09:10 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:02.506 20:09:10 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:02.506 20:09:10 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:02.506 20:09:10 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:02.506 20:09:10 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:02.506 20:09:10 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:02.506 20:09:10 -- ftl/trim.sh@40 -- # svcpid=71752 00:16:02.506 20:09:10 -- ftl/trim.sh@41 -- # waitforlisten 71752 00:16:02.506 20:09:10 -- common/autotest_common.sh@829 -- # '[' -z 71752 ']' 00:16:02.506 20:09:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
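Before it creates any bdevs, trim.sh fixes its parameters and brings up an SPDK target, which is what the xtrace above is doing. A condensed sketch using only values visible in the trace (error handling and the fio_kill helper are omitted, and the launch line is reconstructed):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  device=0000:00:07.0             # base NVMe device for FTL data
  cache_device=0000:00:06.0       # NVMe device used for the FTL write-buffer cache
  timeout=240
  data_size_in_blocks=65536
  unmap_size_in_blocks=1024
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
  # start the target on core mask 0x7 (three cores, matching the --core_mask 7 later passed to bdev_ftl_create)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!
  waitforlisten "$svcpid"         # polls until /var/tmp/spdk.sock answers (pid 71752 in this run)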
00:16:02.506 20:09:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.506 20:09:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.506 20:09:10 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:02.506 20:09:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.506 20:09:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.767 [2024-12-16 20:09:10.220467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:02.767 [2024-12-16 20:09:10.220609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71752 ] 00:16:02.767 [2024-12-16 20:09:10.372029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.028 [2024-12-16 20:09:10.598134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.028 [2024-12-16 20:09:10.598617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.028 [2024-12-16 20:09:10.599003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.028 [2024-12-16 20:09:10.599122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.414 20:09:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.414 20:09:11 -- common/autotest_common.sh@862 -- # return 0 00:16:04.414 20:09:11 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:04.414 20:09:11 -- ftl/common.sh@54 -- # local name=nvme0 00:16:04.414 20:09:11 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:04.414 20:09:11 -- ftl/common.sh@56 -- # local size=103424 00:16:04.414 20:09:11 -- ftl/common.sh@59 -- # local base_bdev 00:16:04.414 20:09:11 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:04.414 20:09:12 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:04.414 20:09:12 -- ftl/common.sh@62 -- # local base_size 00:16:04.414 20:09:12 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:04.414 20:09:12 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:04.414 20:09:12 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:04.414 20:09:12 -- common/autotest_common.sh@1369 -- # local bs 00:16:04.414 20:09:12 -- common/autotest_common.sh@1370 -- # local nb 00:16:04.414 20:09:12 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:04.675 20:09:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:04.675 { 00:16:04.675 "name": "nvme0n1", 00:16:04.675 "aliases": [ 00:16:04.675 "02133f01-3d89-4f67-95c8-53b7122a8446" 00:16:04.675 ], 00:16:04.675 "product_name": "NVMe disk", 00:16:04.675 "block_size": 4096, 00:16:04.675 "num_blocks": 1310720, 00:16:04.675 "uuid": "02133f01-3d89-4f67-95c8-53b7122a8446", 00:16:04.675 "assigned_rate_limits": { 00:16:04.675 "rw_ios_per_sec": 0, 00:16:04.675 "rw_mbytes_per_sec": 0, 00:16:04.675 "r_mbytes_per_sec": 0, 00:16:04.675 "w_mbytes_per_sec": 0 00:16:04.675 }, 00:16:04.675 "claimed": true, 00:16:04.676 "claim_type": "read_many_write_one", 00:16:04.676 "zoned": false, 00:16:04.676 "supported_io_types": { 00:16:04.676 "read": true, 00:16:04.676 "write": true, 00:16:04.676 "unmap": true, 00:16:04.676 
"write_zeroes": true, 00:16:04.676 "flush": true, 00:16:04.676 "reset": true, 00:16:04.676 "compare": true, 00:16:04.676 "compare_and_write": false, 00:16:04.676 "abort": true, 00:16:04.676 "nvme_admin": true, 00:16:04.676 "nvme_io": true 00:16:04.676 }, 00:16:04.676 "driver_specific": { 00:16:04.676 "nvme": [ 00:16:04.676 { 00:16:04.676 "pci_address": "0000:00:07.0", 00:16:04.676 "trid": { 00:16:04.676 "trtype": "PCIe", 00:16:04.676 "traddr": "0000:00:07.0" 00:16:04.676 }, 00:16:04.676 "ctrlr_data": { 00:16:04.676 "cntlid": 0, 00:16:04.676 "vendor_id": "0x1b36", 00:16:04.676 "model_number": "QEMU NVMe Ctrl", 00:16:04.676 "serial_number": "12341", 00:16:04.676 "firmware_revision": "8.0.0", 00:16:04.676 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:04.676 "oacs": { 00:16:04.676 "security": 0, 00:16:04.676 "format": 1, 00:16:04.676 "firmware": 0, 00:16:04.676 "ns_manage": 1 00:16:04.676 }, 00:16:04.676 "multi_ctrlr": false, 00:16:04.676 "ana_reporting": false 00:16:04.676 }, 00:16:04.676 "vs": { 00:16:04.676 "nvme_version": "1.4" 00:16:04.676 }, 00:16:04.676 "ns_data": { 00:16:04.676 "id": 1, 00:16:04.676 "can_share": false 00:16:04.676 } 00:16:04.676 } 00:16:04.676 ], 00:16:04.676 "mp_policy": "active_passive" 00:16:04.676 } 00:16:04.676 } 00:16:04.676 ]' 00:16:04.676 20:09:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:04.676 20:09:12 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:04.676 20:09:12 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:04.676 20:09:12 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:04.676 20:09:12 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:04.676 20:09:12 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:04.676 20:09:12 -- ftl/common.sh@63 -- # base_size=5120 00:16:04.676 20:09:12 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:04.676 20:09:12 -- ftl/common.sh@67 -- # clear_lvols 00:16:04.676 20:09:12 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:04.676 20:09:12 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:04.937 20:09:12 -- ftl/common.sh@28 -- # stores=170f1612-941a-45ed-b6c8-6ad513631c6b 00:16:04.937 20:09:12 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:04.937 20:09:12 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 170f1612-941a-45ed-b6c8-6ad513631c6b 00:16:05.198 20:09:12 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:05.459 20:09:12 -- ftl/common.sh@68 -- # lvs=8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188 00:16:05.459 20:09:12 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188 00:16:05.720 20:09:13 -- ftl/trim.sh@43 -- # split_bdev=2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.720 20:09:13 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.720 20:09:13 -- ftl/common.sh@35 -- # local name=nvc0 00:16:05.720 20:09:13 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:05.720 20:09:13 -- ftl/common.sh@37 -- # local base_bdev=2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.720 20:09:13 -- ftl/common.sh@38 -- # local cache_size= 00:16:05.720 20:09:13 -- ftl/common.sh@41 -- # get_bdev_size 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.720 20:09:13 -- common/autotest_common.sh@1367 -- # local bdev_name=2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.720 20:09:13 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:05.721 20:09:13 -- common/autotest_common.sh@1369 -- # local bs 00:16:05.721 20:09:13 -- common/autotest_common.sh@1370 -- # local nb 00:16:05.721 20:09:13 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.721 20:09:13 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:05.721 { 00:16:05.721 "name": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:05.721 "aliases": [ 00:16:05.721 "lvs/nvme0n1p0" 00:16:05.721 ], 00:16:05.721 "product_name": "Logical Volume", 00:16:05.721 "block_size": 4096, 00:16:05.721 "num_blocks": 26476544, 00:16:05.721 "uuid": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:05.721 "assigned_rate_limits": { 00:16:05.721 "rw_ios_per_sec": 0, 00:16:05.721 "rw_mbytes_per_sec": 0, 00:16:05.721 "r_mbytes_per_sec": 0, 00:16:05.721 "w_mbytes_per_sec": 0 00:16:05.721 }, 00:16:05.721 "claimed": false, 00:16:05.721 "zoned": false, 00:16:05.721 "supported_io_types": { 00:16:05.721 "read": true, 00:16:05.721 "write": true, 00:16:05.721 "unmap": true, 00:16:05.721 "write_zeroes": true, 00:16:05.721 "flush": false, 00:16:05.721 "reset": true, 00:16:05.721 "compare": false, 00:16:05.721 "compare_and_write": false, 00:16:05.721 "abort": false, 00:16:05.721 "nvme_admin": false, 00:16:05.721 "nvme_io": false 00:16:05.721 }, 00:16:05.721 "driver_specific": { 00:16:05.721 "lvol": { 00:16:05.721 "lvol_store_uuid": "8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188", 00:16:05.721 "base_bdev": "nvme0n1", 00:16:05.721 "thin_provision": true, 00:16:05.721 "snapshot": false, 00:16:05.721 "clone": false, 00:16:05.721 "esnap_clone": false 00:16:05.721 } 00:16:05.721 } 00:16:05.721 } 00:16:05.721 ]' 00:16:05.721 20:09:13 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:05.721 20:09:13 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:05.721 20:09:13 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:05.982 20:09:13 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:05.982 20:09:13 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:05.982 20:09:13 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:05.982 20:09:13 -- ftl/common.sh@41 -- # local base_size=5171 00:16:05.982 20:09:13 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:05.982 20:09:13 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:05.982 20:09:13 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:05.982 20:09:13 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:05.982 20:09:13 -- ftl/common.sh@48 -- # get_bdev_size 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.982 20:09:13 -- common/autotest_common.sh@1367 -- # local bdev_name=2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:05.982 20:09:13 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:05.982 20:09:13 -- common/autotest_common.sh@1369 -- # local bs 00:16:05.982 20:09:13 -- common/autotest_common.sh@1370 -- # local nb 00:16:05.982 20:09:13 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:06.244 20:09:13 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:06.244 { 00:16:06.244 "name": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:06.244 "aliases": [ 00:16:06.244 "lvs/nvme0n1p0" 00:16:06.244 ], 00:16:06.244 "product_name": "Logical Volume", 00:16:06.244 "block_size": 4096, 00:16:06.244 "num_blocks": 26476544, 
00:16:06.244 "uuid": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:06.244 "assigned_rate_limits": { 00:16:06.244 "rw_ios_per_sec": 0, 00:16:06.244 "rw_mbytes_per_sec": 0, 00:16:06.244 "r_mbytes_per_sec": 0, 00:16:06.244 "w_mbytes_per_sec": 0 00:16:06.244 }, 00:16:06.244 "claimed": false, 00:16:06.244 "zoned": false, 00:16:06.244 "supported_io_types": { 00:16:06.244 "read": true, 00:16:06.244 "write": true, 00:16:06.244 "unmap": true, 00:16:06.244 "write_zeroes": true, 00:16:06.244 "flush": false, 00:16:06.244 "reset": true, 00:16:06.244 "compare": false, 00:16:06.244 "compare_and_write": false, 00:16:06.244 "abort": false, 00:16:06.244 "nvme_admin": false, 00:16:06.244 "nvme_io": false 00:16:06.244 }, 00:16:06.244 "driver_specific": { 00:16:06.244 "lvol": { 00:16:06.244 "lvol_store_uuid": "8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188", 00:16:06.244 "base_bdev": "nvme0n1", 00:16:06.244 "thin_provision": true, 00:16:06.244 "snapshot": false, 00:16:06.244 "clone": false, 00:16:06.244 "esnap_clone": false 00:16:06.244 } 00:16:06.244 } 00:16:06.244 } 00:16:06.244 ]' 00:16:06.244 20:09:13 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:06.244 20:09:13 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:06.244 20:09:13 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:06.244 20:09:13 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:06.244 20:09:13 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:06.244 20:09:13 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:06.244 20:09:13 -- ftl/common.sh@48 -- # cache_size=5171 00:16:06.244 20:09:13 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:06.505 20:09:14 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:06.505 20:09:14 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:06.505 20:09:14 -- ftl/trim.sh@47 -- # get_bdev_size 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:06.505 20:09:14 -- common/autotest_common.sh@1367 -- # local bdev_name=2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:06.505 20:09:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:06.505 20:09:14 -- common/autotest_common.sh@1369 -- # local bs 00:16:06.505 20:09:14 -- common/autotest_common.sh@1370 -- # local nb 00:16:06.506 20:09:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 00:16:06.767 20:09:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:06.767 { 00:16:06.767 "name": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:06.767 "aliases": [ 00:16:06.767 "lvs/nvme0n1p0" 00:16:06.767 ], 00:16:06.767 "product_name": "Logical Volume", 00:16:06.767 "block_size": 4096, 00:16:06.767 "num_blocks": 26476544, 00:16:06.767 "uuid": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:06.767 "assigned_rate_limits": { 00:16:06.767 "rw_ios_per_sec": 0, 00:16:06.767 "rw_mbytes_per_sec": 0, 00:16:06.767 "r_mbytes_per_sec": 0, 00:16:06.767 "w_mbytes_per_sec": 0 00:16:06.767 }, 00:16:06.767 "claimed": false, 00:16:06.767 "zoned": false, 00:16:06.767 "supported_io_types": { 00:16:06.767 "read": true, 00:16:06.767 "write": true, 00:16:06.767 "unmap": true, 00:16:06.767 "write_zeroes": true, 00:16:06.767 "flush": false, 00:16:06.767 "reset": true, 00:16:06.767 "compare": false, 00:16:06.767 "compare_and_write": false, 00:16:06.767 "abort": false, 00:16:06.767 "nvme_admin": false, 00:16:06.767 "nvme_io": false 00:16:06.767 }, 00:16:06.767 "driver_specific": { 00:16:06.767 "lvol": { 00:16:06.767 
"lvol_store_uuid": "8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188", 00:16:06.767 "base_bdev": "nvme0n1", 00:16:06.767 "thin_provision": true, 00:16:06.767 "snapshot": false, 00:16:06.767 "clone": false, 00:16:06.767 "esnap_clone": false 00:16:06.767 } 00:16:06.767 } 00:16:06.767 } 00:16:06.767 ]' 00:16:06.767 20:09:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:06.767 20:09:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:06.767 20:09:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:06.767 20:09:14 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:06.767 20:09:14 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:06.767 20:09:14 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:06.767 20:09:14 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:06.767 20:09:14 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:07.030 [2024-12-16 20:09:14.455997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.456042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:07.030 [2024-12-16 20:09:14.456058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:07.030 [2024-12-16 20:09:14.456066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.030 [2024-12-16 20:09:14.458772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.458807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:07.030 [2024-12-16 20:09:14.458819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:16:07.030 [2024-12-16 20:09:14.458827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.030 [2024-12-16 20:09:14.458926] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:07.030 [2024-12-16 20:09:14.459665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:07.030 [2024-12-16 20:09:14.459693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.459700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:07.030 [2024-12-16 20:09:14.459711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:16:07.030 [2024-12-16 20:09:14.459719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.030 [2024-12-16 20:09:14.460186] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fa9ee28d-6188-48e1-a96b-343e6273784c 00:16:07.030 [2024-12-16 20:09:14.461215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.461253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:07.030 [2024-12-16 20:09:14.461264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:07.030 [2024-12-16 20:09:14.461274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.030 [2024-12-16 20:09:14.466169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.466203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:07.030 
[2024-12-16 20:09:14.466212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.806 ms 00:16:07.030 [2024-12-16 20:09:14.466221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.030 [2024-12-16 20:09:14.466352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.466365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:07.030 [2024-12-16 20:09:14.466374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:07.030 [2024-12-16 20:09:14.466386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.030 [2024-12-16 20:09:14.466425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.030 [2024-12-16 20:09:14.466434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:07.030 [2024-12-16 20:09:14.466441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:07.030 [2024-12-16 20:09:14.466450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.031 [2024-12-16 20:09:14.466484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:07.031 [2024-12-16 20:09:14.470124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.031 [2024-12-16 20:09:14.470154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:07.031 [2024-12-16 20:09:14.470166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.647 ms 00:16:07.031 [2024-12-16 20:09:14.470173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.031 [2024-12-16 20:09:14.470233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.031 [2024-12-16 20:09:14.470242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:07.031 [2024-12-16 20:09:14.470251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:07.031 [2024-12-16 20:09:14.470258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.031 [2024-12-16 20:09:14.470307] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:07.031 [2024-12-16 20:09:14.470421] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:07.031 [2024-12-16 20:09:14.470441] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:07.031 [2024-12-16 20:09:14.470451] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:07.031 [2024-12-16 20:09:14.470463] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470471] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470482] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:07.031 [2024-12-16 20:09:14.470489] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:07.031 [2024-12-16 20:09:14.470498] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:07.031 [2024-12-16 20:09:14.470506] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:07.031 [2024-12-16 
20:09:14.470514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.031 [2024-12-16 20:09:14.470522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:07.031 [2024-12-16 20:09:14.470530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:16:07.031 [2024-12-16 20:09:14.470537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.031 [2024-12-16 20:09:14.470611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.031 [2024-12-16 20:09:14.470619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:07.031 [2024-12-16 20:09:14.470629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:07.031 [2024-12-16 20:09:14.470635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.031 [2024-12-16 20:09:14.470727] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:07.031 [2024-12-16 20:09:14.470742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:07.031 [2024-12-16 20:09:14.470751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:07.031 [2024-12-16 20:09:14.470773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:07.031 [2024-12-16 20:09:14.470796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:07.031 [2024-12-16 20:09:14.470810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:07.031 [2024-12-16 20:09:14.470818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:07.031 [2024-12-16 20:09:14.470827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:07.031 [2024-12-16 20:09:14.470833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:07.031 [2024-12-16 20:09:14.470843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:07.031 [2024-12-16 20:09:14.470849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:07.031 [2024-12-16 20:09:14.470865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:07.031 [2024-12-16 20:09:14.470873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:07.031 [2024-12-16 20:09:14.470887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:07.031 [2024-12-16 20:09:14.470894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:07.031 [2024-12-16 20:09:14.470908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:07.031 [2024-12-16 20:09:14.470916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:07.031 [2024-12-16 20:09:14.470930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:07.031 [2024-12-16 20:09:14.470950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:07.031 [2024-12-16 20:09:14.470972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:07.031 [2024-12-16 20:09:14.470978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.031 [2024-12-16 20:09:14.470986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:07.031 [2024-12-16 20:09:14.470992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:07.031 [2024-12-16 20:09:14.471000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:07.031 [2024-12-16 20:09:14.471006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:07.031 [2024-12-16 20:09:14.471014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:07.031 [2024-12-16 20:09:14.471020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:07.031 [2024-12-16 20:09:14.471029] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:07.031 [2024-12-16 20:09:14.471036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:07.031 [2024-12-16 20:09:14.471044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:07.031 [2024-12-16 20:09:14.471052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.031 [2024-12-16 20:09:14.471062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:07.031 [2024-12-16 20:09:14.471069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:07.031 [2024-12-16 20:09:14.471076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:07.031 [2024-12-16 20:09:14.471083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:07.031 [2024-12-16 20:09:14.471092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:07.031 [2024-12-16 20:09:14.471099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:07.031 [2024-12-16 20:09:14.471108] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:07.031 [2024-12-16 20:09:14.471116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:07.031 [2024-12-16 20:09:14.471126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:07.031 [2024-12-16 20:09:14.471133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:07.031 [2024-12-16 20:09:14.471141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:07.031 [2024-12-16 20:09:14.471148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:07.031 [2024-12-16 20:09:14.471156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:07.031 [2024-12-16 20:09:14.471163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:07.031 [2024-12-16 20:09:14.471171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:07.031 [2024-12-16 20:09:14.471178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:07.031 [2024-12-16 20:09:14.471187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:07.031 [2024-12-16 20:09:14.471193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:07.031 [2024-12-16 20:09:14.471202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:07.031 [2024-12-16 20:09:14.471209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:07.031 [2024-12-16 20:09:14.471221] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:07.031 [2024-12-16 20:09:14.471227] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:07.031 [2024-12-16 20:09:14.471237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:07.031 [2024-12-16 20:09:14.471244] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:07.031 [2024-12-16 20:09:14.471253] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:07.031 [2024-12-16 20:09:14.471259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:07.031 [2024-12-16 20:09:14.471267] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:07.031 [2024-12-16 20:09:14.471275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.031 [2024-12-16 20:09:14.471283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:07.031 [2024-12-16 20:09:14.471290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:16:07.032 [2024-12-16 20:09:14.471307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.485867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
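The superblock layout dump above expresses every region as a block offset and block count in hex. As a quick cross-check, converting the larger blk_sz values at the 4096-byte block size reported for this device reproduces the MiB figures printed in the layout summary; the to_mib helper below is only illustrative, not part of the test.

    # Cross-check: hex blk_sz fields from the dump above, converted at 4096 B per block.
    to_mib() { echo $(( $1 * 4096 / 1024 / 1024 )); }
    to_mib $(( 0x1900000 ))   # base-dev data region -> 102400 MiB (data_btm: 102400.00 MiB)
    to_mib $(( 0x100000 ))    # nv-cache data region -> 4096 MiB  (data_nvc: 4096.00 MiB)
    to_mib $(( 0x5a00 ))      # l2p region           -> 90 MiB    (l2p: 90.00 MiB)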
00:16:07.032 [2024-12-16 20:09:14.485903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:07.032 [2024-12-16 20:09:14.485913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.478 ms 00:16:07.032 [2024-12-16 20:09:14.485921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.486033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.486052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:07.032 [2024-12-16 20:09:14.486062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:07.032 [2024-12-16 20:09:14.486070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.517114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.517155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:07.032 [2024-12-16 20:09:14.517165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.016 ms 00:16:07.032 [2024-12-16 20:09:14.517174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.517232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.517242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:07.032 [2024-12-16 20:09:14.517250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:07.032 [2024-12-16 20:09:14.517263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.517593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.517617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:07.032 [2024-12-16 20:09:14.517626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:07.032 [2024-12-16 20:09:14.517635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.517746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.517763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:07.032 [2024-12-16 20:09:14.517771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:07.032 [2024-12-16 20:09:14.517779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.542800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.542847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:07.032 [2024-12-16 20:09:14.542863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.990 ms 00:16:07.032 [2024-12-16 20:09:14.542876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.555663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:07.032 [2024-12-16 20:09:14.569365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.569398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:07.032 [2024-12-16 20:09:14.569411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.361 ms 00:16:07.032 
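The L2P figures just reported tie back to the bdev_ftl_create parameters: the numbers are consistent with 10% overprovisioning over the 0x1900000-block data region leaving 23592960 user blocks, and at 4 bytes per entry a fully resident L2P would need 90 MiB, which is why the 60 MiB --l2p_dram_limit caps the resident cache at 59 MiB. A short arithmetic sketch, using only values printed above:

    # L2P sizing, reproduced from the values logged above.
    echo $(( 26214400 * 90 / 100 ))         # 23592960 user blocks after 10% overprovisioning
    echo $(( 23592960 * 4 / 1024 / 1024 ))  # 90 MiB for a fully resident 4-byte-per-entry L2P
    # hence "--l2p_dram_limit 60" -> "l2p maximum resident size is: 59 (of 60) MiB"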
[2024-12-16 20:09:14.569419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.644990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.032 [2024-12-16 20:09:14.645035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:07.032 [2024-12-16 20:09:14.645050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.507 ms 00:16:07.032 [2024-12-16 20:09:14.645059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.032 [2024-12-16 20:09:14.645118] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:07.032 [2024-12-16 20:09:14.645131] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:09.579 [2024-12-16 20:09:17.209700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.579 [2024-12-16 20:09:17.209756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:09.579 [2024-12-16 20:09:17.209771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2564.569 ms 00:16:09.579 [2024-12-16 20:09:17.209778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.579 [2024-12-16 20:09:17.209965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.579 [2024-12-16 20:09:17.209977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:09.579 [2024-12-16 20:09:17.209986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:16:09.579 [2024-12-16 20:09:17.209992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.840 [2024-12-16 20:09:17.228341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.840 [2024-12-16 20:09:17.228370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:09.840 [2024-12-16 20:09:17.228380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.318 ms 00:16:09.840 [2024-12-16 20:09:17.228386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.840 [2024-12-16 20:09:17.245820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.840 [2024-12-16 20:09:17.245848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:09.840 [2024-12-16 20:09:17.245860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.378 ms 00:16:09.840 [2024-12-16 20:09:17.245865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.840 [2024-12-16 20:09:17.246137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.840 [2024-12-16 20:09:17.246150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:09.840 [2024-12-16 20:09:17.246158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:16:09.840 [2024-12-16 20:09:17.246165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.840 [2024-12-16 20:09:17.297159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.840 [2024-12-16 20:09:17.297186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:09.840 [2024-12-16 20:09:17.297197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.967 ms 00:16:09.840 [2024-12-16 20:09:17.297203] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.840 [2024-12-16 20:09:17.316308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.840 [2024-12-16 20:09:17.316337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:09.840 [2024-12-16 20:09:17.316348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.037 ms 00:16:09.840 [2024-12-16 20:09:17.316354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.841 [2024-12-16 20:09:17.320222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.841 [2024-12-16 20:09:17.320253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:09.841 [2024-12-16 20:09:17.320264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.813 ms 00:16:09.841 [2024-12-16 20:09:17.320271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.841 [2024-12-16 20:09:17.338371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.841 [2024-12-16 20:09:17.338399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:09.841 [2024-12-16 20:09:17.338408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.047 ms 00:16:09.841 [2024-12-16 20:09:17.338414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.841 [2024-12-16 20:09:17.338472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.841 [2024-12-16 20:09:17.338479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:09.841 [2024-12-16 20:09:17.338488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:09.841 [2024-12-16 20:09:17.338494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.841 [2024-12-16 20:09:17.338570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.841 [2024-12-16 20:09:17.338589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:09.841 [2024-12-16 20:09:17.338596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:09.841 [2024-12-16 20:09:17.338602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.841 [2024-12-16 20:09:17.339327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:09.841 [2024-12-16 20:09:17.341746] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2883.109 ms, result 0 00:16:09.841 [2024-12-16 20:09:17.342436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:09.841 { 00:16:09.841 "name": "ftl0", 00:16:09.841 "uuid": "fa9ee28d-6188-48e1-a96b-343e6273784c" 00:16:09.841 } 00:16:09.841 20:09:17 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:09.841 20:09:17 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:09.841 20:09:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:09.841 20:09:17 -- common/autotest_common.sh@899 -- # local i 00:16:09.841 20:09:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:09.841 20:09:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:09.841 20:09:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:10.102 20:09:17 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:10.102 [ 00:16:10.102 { 00:16:10.102 "name": "ftl0", 00:16:10.102 "aliases": [ 00:16:10.102 "fa9ee28d-6188-48e1-a96b-343e6273784c" 00:16:10.102 ], 00:16:10.102 "product_name": "FTL disk", 00:16:10.102 "block_size": 4096, 00:16:10.102 "num_blocks": 23592960, 00:16:10.102 "uuid": "fa9ee28d-6188-48e1-a96b-343e6273784c", 00:16:10.102 "assigned_rate_limits": { 00:16:10.102 "rw_ios_per_sec": 0, 00:16:10.102 "rw_mbytes_per_sec": 0, 00:16:10.102 "r_mbytes_per_sec": 0, 00:16:10.102 "w_mbytes_per_sec": 0 00:16:10.102 }, 00:16:10.102 "claimed": false, 00:16:10.102 "zoned": false, 00:16:10.102 "supported_io_types": { 00:16:10.102 "read": true, 00:16:10.102 "write": true, 00:16:10.102 "unmap": true, 00:16:10.102 "write_zeroes": true, 00:16:10.102 "flush": true, 00:16:10.102 "reset": false, 00:16:10.102 "compare": false, 00:16:10.102 "compare_and_write": false, 00:16:10.102 "abort": false, 00:16:10.102 "nvme_admin": false, 00:16:10.102 "nvme_io": false 00:16:10.102 }, 00:16:10.102 "driver_specific": { 00:16:10.102 "ftl": { 00:16:10.102 "base_bdev": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:10.102 "cache": "nvc0n1p0" 00:16:10.102 } 00:16:10.102 } 00:16:10.102 } 00:16:10.102 ] 00:16:10.361 20:09:17 -- common/autotest_common.sh@905 -- # return 0 00:16:10.361 20:09:17 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:10.361 20:09:17 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:10.361 20:09:17 -- ftl/trim.sh@56 -- # echo ']}' 00:16:10.361 20:09:17 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:10.620 20:09:18 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:10.620 { 00:16:10.620 "name": "ftl0", 00:16:10.620 "aliases": [ 00:16:10.620 "fa9ee28d-6188-48e1-a96b-343e6273784c" 00:16:10.620 ], 00:16:10.620 "product_name": "FTL disk", 00:16:10.620 "block_size": 4096, 00:16:10.620 "num_blocks": 23592960, 00:16:10.620 "uuid": "fa9ee28d-6188-48e1-a96b-343e6273784c", 00:16:10.620 "assigned_rate_limits": { 00:16:10.620 "rw_ios_per_sec": 0, 00:16:10.620 "rw_mbytes_per_sec": 0, 00:16:10.620 "r_mbytes_per_sec": 0, 00:16:10.620 "w_mbytes_per_sec": 0 00:16:10.620 }, 00:16:10.620 "claimed": false, 00:16:10.620 "zoned": false, 00:16:10.620 "supported_io_types": { 00:16:10.620 "read": true, 00:16:10.620 "write": true, 00:16:10.620 "unmap": true, 00:16:10.620 "write_zeroes": true, 00:16:10.620 "flush": true, 00:16:10.620 "reset": false, 00:16:10.620 "compare": false, 00:16:10.620 "compare_and_write": false, 00:16:10.620 "abort": false, 00:16:10.620 "nvme_admin": false, 00:16:10.620 "nvme_io": false 00:16:10.620 }, 00:16:10.620 "driver_specific": { 00:16:10.620 "ftl": { 00:16:10.620 "base_bdev": "2faf1a23-430d-4d34-8dc9-4b6e4d01b0d0", 00:16:10.620 "cache": "nvc0n1p0" 00:16:10.620 } 00:16:10.620 } 00:16:10.620 } 00:16:10.620 ]' 00:16:10.620 20:09:18 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:10.620 20:09:18 -- ftl/trim.sh@60 -- # nb=23592960 00:16:10.620 20:09:18 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:10.879 [2024-12-16 20:09:18.311889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.312024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:10.879 [2024-12-16 20:09:18.312073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:10.879 [2024-12-16 20:09:18.312094] 
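The waitforbdev call above resolves once ftl0 is visible: an examine barrier followed by a bdev_get_bdevs lookup with the 2000 ms default timeout. A minimal sketch of that pattern, composed from the same two RPCs; this is not the autotest_common.sh helper itself.

    # Sketch of the wait-for-bdev pattern seen above; not the real helper.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_wait_for_examine                      # let bdev examination settle first
    if "$RPC" bdev_get_bdevs -b ftl0 -t 2000 > /dev/null; then
        echo "ftl0 is ready"                          # waitforbdev returns 0 at this point
    fi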
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.312147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:10.879 [2024-12-16 20:09:18.314151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.314235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:10.879 [2024-12-16 20:09:18.314285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:16:10.879 [2024-12-16 20:09:18.314318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.314872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.314930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:10.879 [2024-12-16 20:09:18.314944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:16:10.879 [2024-12-16 20:09:18.314951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.317849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.317863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:10.879 [2024-12-16 20:09:18.317874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:16:10.879 [2024-12-16 20:09:18.317880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.323203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.323229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:10.879 [2024-12-16 20:09:18.323238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms 00:16:10.879 [2024-12-16 20:09:18.323244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.341980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.342009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:10.879 [2024-12-16 20:09:18.342020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.645 ms 00:16:10.879 [2024-12-16 20:09:18.342026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.354713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.354741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:10.879 [2024-12-16 20:09:18.354751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.622 ms 00:16:10.879 [2024-12-16 20:09:18.354758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.879 [2024-12-16 20:09:18.354943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.879 [2024-12-16 20:09:18.354952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:10.880 [2024-12-16 20:09:18.354963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:16:10.880 [2024-12-16 20:09:18.354969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.880 [2024-12-16 20:09:18.373142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.880 [2024-12-16 20:09:18.373240] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:10.880 [2024-12-16 20:09:18.373256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.142 ms 00:16:10.880 [2024-12-16 20:09:18.373261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.880 [2024-12-16 20:09:18.391083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.880 [2024-12-16 20:09:18.391108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:10.880 [2024-12-16 20:09:18.391117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:16:10.880 [2024-12-16 20:09:18.391123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.880 [2024-12-16 20:09:18.408572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.880 [2024-12-16 20:09:18.408598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:10.880 [2024-12-16 20:09:18.408608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.400 ms 00:16:10.880 [2024-12-16 20:09:18.408614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.880 [2024-12-16 20:09:18.426257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.880 [2024-12-16 20:09:18.426283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:10.880 [2024-12-16 20:09:18.426294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.552 ms 00:16:10.880 [2024-12-16 20:09:18.426314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.880 [2024-12-16 20:09:18.426371] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:10.880 [2024-12-16 20:09:18.426383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426637] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 
20:09:18.426800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:10.880 [2024-12-16 20:09:18.426881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:10.881 [2024-12-16 20:09:18.426956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.426994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:10.881 [2024-12-16 20:09:18.427044] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:10.881 [2024-12-16 20:09:18.427051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:16:10.881 [2024-12-16 20:09:18.427057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:10.881 [2024-12-16 20:09:18.427063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:10.881 [2024-12-16 20:09:18.427069] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:10.881 [2024-12-16 20:09:18.427075] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:10.881 [2024-12-16 20:09:18.427081] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:10.881 [2024-12-16 20:09:18.427088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:10.881 [2024-12-16 20:09:18.427093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:10.881 [2024-12-16 20:09:18.427100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:10.881 [2024-12-16 20:09:18.427105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:10.881 [2024-12-16 20:09:18.427112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.881 [2024-12-16 20:09:18.427119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:10.881 [2024-12-16 20:09:18.427127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:16:10.881 [2024-12-16 20:09:18.427132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:10.881 [2024-12-16 20:09:18.437019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.881 [2024-12-16 20:09:18.437045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:10.881 [2024-12-16 20:09:18.437054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.862 ms 00:16:10.881 [2024-12-16 20:09:18.437060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.881 [2024-12-16 20:09:18.437244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.881 [2024-12-16 20:09:18.437251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:10.881 [2024-12-16 20:09:18.437259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:10.881 [2024-12-16 20:09:18.437264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.881 [2024-12-16 20:09:18.472606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.881 [2024-12-16 20:09:18.472634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.881 [2024-12-16 20:09:18.472646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.881 [2024-12-16 20:09:18.472652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.881 [2024-12-16 20:09:18.472731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.881 [2024-12-16 20:09:18.472737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.881 [2024-12-16 20:09:18.472745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.881 [2024-12-16 20:09:18.472751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.881 [2024-12-16 20:09:18.472802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.881 [2024-12-16 20:09:18.472809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.881 [2024-12-16 20:09:18.472817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.881 [2024-12-16 20:09:18.472823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.881 [2024-12-16 20:09:18.472847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.881 [2024-12-16 20:09:18.472855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.881 [2024-12-16 20:09:18.472862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.881 [2024-12-16 20:09:18.472867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.539321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.539364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.139 [2024-12-16 20:09:18.539377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.539384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.562545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.562574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.139 [2024-12-16 20:09:18.562583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 
[2024-12-16 20:09:18.562590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.562655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.562663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.139 [2024-12-16 20:09:18.562671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.562676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.562729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.562736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.139 [2024-12-16 20:09:18.562744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.562762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.562847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.562855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.139 [2024-12-16 20:09:18.562863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.562870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.562918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.562925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:11.139 [2024-12-16 20:09:18.562934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.562939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.562986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.562996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.139 [2024-12-16 20:09:18.563003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.563008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.563064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.139 [2024-12-16 20:09:18.563071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.139 [2024-12-16 20:09:18.563079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.139 [2024-12-16 20:09:18.563085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.139 [2024-12-16 20:09:18.563237] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 251.329 ms, result 0 00:16:11.139 true 00:16:11.139 20:09:18 -- ftl/trim.sh@63 -- # killprocess 71752 00:16:11.139 20:09:18 -- common/autotest_common.sh@936 -- # '[' -z 71752 ']' 00:16:11.139 20:09:18 -- common/autotest_common.sh@940 -- # kill -0 71752 00:16:11.139 20:09:18 -- common/autotest_common.sh@941 -- # uname 00:16:11.139 20:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.139 20:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71752 00:16:11.139 killing process with pid 71752 00:16:11.139 20:09:18 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.139 20:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.139 20:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71752' 00:16:11.139 20:09:18 -- common/autotest_common.sh@955 -- # kill 71752 00:16:11.139 20:09:18 -- common/autotest_common.sh@960 -- # wait 71752 00:16:17.738 20:09:24 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:17.738 65536+0 records in 00:16:17.738 65536+0 records out 00:16:17.738 268435456 bytes (268 MB, 256 MiB) copied, 1.07353 s, 250 MB/s 00:16:17.738 20:09:25 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:17.738 [2024-12-16 20:09:25.324664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.738 [2024-12-16 20:09:25.324934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71956 ] 00:16:17.999 [2024-12-16 20:09:25.474137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.260 [2024-12-16 20:09:25.698718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.521 [2024-12-16 20:09:25.983802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:18.522 [2024-12-16 20:09:25.983882] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:18.522 [2024-12-16 20:09:26.144278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.522 [2024-12-16 20:09:26.144359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:18.522 [2024-12-16 20:09:26.144376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:18.522 [2024-12-16 20:09:26.144385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.522 [2024-12-16 20:09:26.147449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.522 [2024-12-16 20:09:26.147503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:18.522 [2024-12-16 20:09:26.147514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:16:18.522 [2024-12-16 20:09:26.147522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.522 [2024-12-16 20:09:26.147639] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:18.522 [2024-12-16 20:09:26.148439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:18.522 [2024-12-16 20:09:26.148463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.522 [2024-12-16 20:09:26.148473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:18.522 [2024-12-16 20:09:26.148482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:16:18.522 [2024-12-16 20:09:26.148490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.522 [2024-12-16 20:09:26.150725] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:18.784 [2024-12-16 20:09:26.172696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.172758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:18.784 [2024-12-16 20:09:26.172776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.973 ms 00:16:18.784 [2024-12-16 20:09:26.172785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.172971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.172987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:18.784 [2024-12-16 20:09:26.172997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:18.784 [2024-12-16 20:09:26.173006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.184515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.184557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:18.784 [2024-12-16 20:09:26.184571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.449 ms 00:16:18.784 [2024-12-16 20:09:26.184586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.184743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.184758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:18.784 [2024-12-16 20:09:26.184768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:18.784 [2024-12-16 20:09:26.184776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.184810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.184821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:18.784 [2024-12-16 20:09:26.184829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:18.784 [2024-12-16 20:09:26.184837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.184878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:18.784 [2024-12-16 20:09:26.189663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.189705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:18.784 [2024-12-16 20:09:26.189717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:16:18.784 [2024-12-16 20:09:26.189729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.189799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.189809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:18.784 [2024-12-16 20:09:26.189819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:18.784 [2024-12-16 20:09:26.189827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.189851] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:18.784 [2024-12-16 20:09:26.189878] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:18.784 [2024-12-16 20:09:26.189919] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:18.784 [2024-12-16 20:09:26.189939] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:18.784 [2024-12-16 20:09:26.190022] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:18.784 [2024-12-16 20:09:26.190036] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:18.784 [2024-12-16 20:09:26.190047] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:18.784 [2024-12-16 20:09:26.190058] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:18.784 [2024-12-16 20:09:26.190068] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:18.784 [2024-12-16 20:09:26.190076] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:18.784 [2024-12-16 20:09:26.190085] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:18.784 [2024-12-16 20:09:26.190093] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:18.784 [2024-12-16 20:09:26.190105] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:18.784 [2024-12-16 20:09:26.190114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.190122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:18.784 [2024-12-16 20:09:26.190130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:18.784 [2024-12-16 20:09:26.190139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.190208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.784 [2024-12-16 20:09:26.190219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:18.784 [2024-12-16 20:09:26.190227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:18.784 [2024-12-16 20:09:26.190237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.784 [2024-12-16 20:09:26.190348] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:18.784 [2024-12-16 20:09:26.190362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:18.784 [2024-12-16 20:09:26.190371] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.784 [2024-12-16 20:09:26.190380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.784 [2024-12-16 20:09:26.190388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:18.784 [2024-12-16 20:09:26.190395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:18.784 [2024-12-16 20:09:26.190402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:18.784 [2024-12-16 20:09:26.190413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:18.784 [2024-12-16 20:09:26.190421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:18.784 [2024-12-16 20:09:26.190429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.784 [2024-12-16 20:09:26.190436] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:18.784 [2024-12-16 20:09:26.190443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:18.784 [2024-12-16 20:09:26.190455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.784 [2024-12-16 20:09:26.190464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:18.785 [2024-12-16 20:09:26.190484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:18.785 [2024-12-16 20:09:26.190491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:18.785 [2024-12-16 20:09:26.190504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:18.785 [2024-12-16 20:09:26.190511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:18.785 [2024-12-16 20:09:26.190526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:18.785 [2024-12-16 20:09:26.190534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:18.785 [2024-12-16 20:09:26.190549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:18.785 [2024-12-16 20:09:26.190569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:18.785 [2024-12-16 20:09:26.190591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:18.785 [2024-12-16 20:09:26.190612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:18.785 [2024-12-16 20:09:26.190646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.785 [2024-12-16 20:09:26.190660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:18.785 [2024-12-16 20:09:26.190667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:18.785 [2024-12-16 20:09:26.190674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.785 [2024-12-16 20:09:26.190680] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:18.785 [2024-12-16 20:09:26.190688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:16:18.785 [2024-12-16 20:09:26.190695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.785 [2024-12-16 20:09:26.190720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:18.785 [2024-12-16 20:09:26.190728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:18.785 [2024-12-16 20:09:26.190735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:18.785 [2024-12-16 20:09:26.190743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:18.785 [2024-12-16 20:09:26.190750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:18.785 [2024-12-16 20:09:26.190759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:18.785 [2024-12-16 20:09:26.190768] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:18.785 [2024-12-16 20:09:26.190779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.785 [2024-12-16 20:09:26.190788] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:18.785 [2024-12-16 20:09:26.190795] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:18.785 [2024-12-16 20:09:26.190803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:18.785 [2024-12-16 20:09:26.190810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:18.785 [2024-12-16 20:09:26.190819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:18.785 [2024-12-16 20:09:26.190826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:18.785 [2024-12-16 20:09:26.190833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:18.785 [2024-12-16 20:09:26.190840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:18.785 [2024-12-16 20:09:26.190847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:18.785 [2024-12-16 20:09:26.190855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:18.785 [2024-12-16 20:09:26.190862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:18.785 [2024-12-16 20:09:26.190869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:18.785 [2024-12-16 20:09:26.190881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:18.785 [2024-12-16 20:09:26.190890] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:18.785 [2024-12-16 20:09:26.190904] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.785 [2024-12-16 20:09:26.190912] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:18.785 [2024-12-16 20:09:26.190919] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:18.785 [2024-12-16 20:09:26.190927] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:18.785 [2024-12-16 20:09:26.190935] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:18.785 [2024-12-16 20:09:26.190944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.190953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:18.785 [2024-12-16 20:09:26.190961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:16:18.785 [2024-12-16 20:09:26.190968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.212702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.213012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:18.785 [2024-12-16 20:09:26.213033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.685 ms 00:16:18.785 [2024-12-16 20:09:26.213042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.213195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.213206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:18.785 [2024-12-16 20:09:26.213215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:18.785 [2024-12-16 20:09:26.213224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.265399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.265451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:18.785 [2024-12-16 20:09:26.265464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.149 ms 00:16:18.785 [2024-12-16 20:09:26.265475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.265569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.265582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:18.785 [2024-12-16 20:09:26.265596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.785 [2024-12-16 20:09:26.265604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.266283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.266345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:18.785 [2024-12-16 20:09:26.266356] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:16:18.785 [2024-12-16 20:09:26.266366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.266539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.266550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:18.785 [2024-12-16 20:09:26.266559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:18.785 [2024-12-16 20:09:26.266568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.286561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.286604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:18.785 [2024-12-16 20:09:26.286616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.961 ms 00:16:18.785 [2024-12-16 20:09:26.286629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.302154] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:18.785 [2024-12-16 20:09:26.302455] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:18.785 [2024-12-16 20:09:26.302477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.302487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:18.785 [2024-12-16 20:09:26.302498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.715 ms 00:16:18.785 [2024-12-16 20:09:26.302507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.785 [2024-12-16 20:09:26.329436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.785 [2024-12-16 20:09:26.329631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:18.785 [2024-12-16 20:09:26.329661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.721 ms 00:16:18.786 [2024-12-16 20:09:26.329670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.786 [2024-12-16 20:09:26.343109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.786 [2024-12-16 20:09:26.343153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:18.786 [2024-12-16 20:09:26.343165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.252 ms 00:16:18.786 [2024-12-16 20:09:26.343187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.786 [2024-12-16 20:09:26.355653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.786 [2024-12-16 20:09:26.355698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:18.786 [2024-12-16 20:09:26.355710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.380 ms 00:16:18.786 [2024-12-16 20:09:26.355718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.786 [2024-12-16 20:09:26.356140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.786 [2024-12-16 20:09:26.356163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:18.786 [2024-12-16 20:09:26.356173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:16:18.786 
[2024-12-16 20:09:26.356182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.047 [2024-12-16 20:09:26.429451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.047 [2024-12-16 20:09:26.429506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:19.047 [2024-12-16 20:09:26.429522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.242 ms 00:16:19.047 [2024-12-16 20:09:26.429532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.047 [2024-12-16 20:09:26.441537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:19.047 [2024-12-16 20:09:26.465376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.047 [2024-12-16 20:09:26.465424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:19.047 [2024-12-16 20:09:26.465438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.733 ms 00:16:19.047 [2024-12-16 20:09:26.465447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.047 [2024-12-16 20:09:26.465542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.047 [2024-12-16 20:09:26.465554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:19.047 [2024-12-16 20:09:26.465563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:19.047 [2024-12-16 20:09:26.465576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.047 [2024-12-16 20:09:26.465645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.047 [2024-12-16 20:09:26.465660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:19.047 [2024-12-16 20:09:26.465669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:19.047 [2024-12-16 20:09:26.465678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.047 [2024-12-16 20:09:26.467242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.047 [2024-12-16 20:09:26.467280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:19.047 [2024-12-16 20:09:26.467291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:16:19.047 [2024-12-16 20:09:26.467318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.047 [2024-12-16 20:09:26.467391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.047 [2024-12-16 20:09:26.467402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:19.047 [2024-12-16 20:09:26.467417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:19.048 [2024-12-16 20:09:26.467425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.048 [2024-12-16 20:09:26.467471] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:19.048 [2024-12-16 20:09:26.467481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.048 [2024-12-16 20:09:26.467490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:19.048 [2024-12-16 20:09:26.467499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:19.048 [2024-12-16 20:09:26.467508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.048 [2024-12-16 
20:09:26.494505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.048 [2024-12-16 20:09:26.494564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:19.048 [2024-12-16 20:09:26.494578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.969 ms 00:16:19.048 [2024-12-16 20:09:26.494588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.048 [2024-12-16 20:09:26.494704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.048 [2024-12-16 20:09:26.494717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:19.048 [2024-12-16 20:09:26.494727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:19.048 [2024-12-16 20:09:26.494736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.048 [2024-12-16 20:09:26.496073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:19.048 [2024-12-16 20:09:26.499662] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.417 ms, result 0 00:16:19.048 [2024-12-16 20:09:26.500898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:19.048 [2024-12-16 20:09:26.515188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:19.991  [2024-12-16T20:09:28.568Z] Copying: 17/256 [MB] (17 MBps) [2024-12-16T20:09:29.946Z] Copying: 37/256 [MB] (19 MBps) [2024-12-16T20:09:30.519Z] Copying: 58/256 [MB] (21 MBps) [2024-12-16T20:09:31.905Z] Copying: 73/256 [MB] (15 MBps) [2024-12-16T20:09:32.850Z] Copying: 120/256 [MB] (46 MBps) [2024-12-16T20:09:33.792Z] Copying: 164/256 [MB] (43 MBps) [2024-12-16T20:09:34.736Z] Copying: 177/256 [MB] (13 MBps) [2024-12-16T20:09:35.677Z] Copying: 191768/262144 [kB] (10212 kBps) [2024-12-16T20:09:36.250Z] Copying: 227/256 [MB] (40 MBps) [2024-12-16T20:09:36.250Z] Copying: 256/256 [MB] (average 26 MBps)[2024-12-16 20:09:36.075551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:28.610 [2024-12-16 20:09:36.082980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.610 [2024-12-16 20:09:36.083081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:28.611 [2024-12-16 20:09:36.083140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:28.611 [2024-12-16 20:09:36.083158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.083188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:28.611 [2024-12-16 20:09:36.085212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.085330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:28.611 [2024-12-16 20:09:36.085390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:16:28.611 [2024-12-16 20:09:36.085409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.086874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.086958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:28.611 
[2024-12-16 20:09:36.087004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:16:28.611 [2024-12-16 20:09:36.087022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.092437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.092524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:28.611 [2024-12-16 20:09:36.092569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.376 ms 00:16:28.611 [2024-12-16 20:09:36.092587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.097901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.097986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:28.611 [2024-12-16 20:09:36.098027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.274 ms 00:16:28.611 [2024-12-16 20:09:36.098068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.115994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.116083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:28.611 [2024-12-16 20:09:36.116127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.862 ms 00:16:28.611 [2024-12-16 20:09:36.116144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.127619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.127707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:28.611 [2024-12-16 20:09:36.127745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.435 ms 00:16:28.611 [2024-12-16 20:09:36.127761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.127868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.127892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:28.611 [2024-12-16 20:09:36.127908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:28.611 [2024-12-16 20:09:36.127921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.146113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.146206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:28.611 [2024-12-16 20:09:36.146243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.171 ms 00:16:28.611 [2024-12-16 20:09:36.146259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.163811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.163895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:28.611 [2024-12-16 20:09:36.163931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.493 ms 00:16:28.611 [2024-12-16 20:09:36.163947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.181207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.181289] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:28.611 [2024-12-16 20:09:36.181349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.221 ms 00:16:28.611 [2024-12-16 20:09:36.181365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.198689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.611 [2024-12-16 20:09:36.198770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:28.611 [2024-12-16 20:09:36.198809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.265 ms 00:16:28.611 [2024-12-16 20:09:36.198825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.611 [2024-12-16 20:09:36.198864] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:28.611 [2024-12-16 20:09:36.198899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.198942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.198987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:16:28.611 [2024-12-16 20:09:36.199588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.199999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:28.611 [2024-12-16 20:09:36.200517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200847] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:28.612 [2024-12-16 20:09:36.200898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:28.612 [2024-12-16 20:09:36.200904] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:16:28.612 [2024-12-16 20:09:36.200910] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:28.612 [2024-12-16 20:09:36.200915] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:28.612 [2024-12-16 20:09:36.200921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:28.612 [2024-12-16 20:09:36.200926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:28.612 [2024-12-16 20:09:36.200932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:28.612 [2024-12-16 20:09:36.200937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:28.612 [2024-12-16 20:09:36.200945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:28.612 [2024-12-16 20:09:36.200951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:28.612 [2024-12-16 20:09:36.200956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:28.612 [2024-12-16 20:09:36.200961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.612 [2024-12-16 20:09:36.200967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:28.612 [2024-12-16 20:09:36.200973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:16:28.612 [2024-12-16 20:09:36.200979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.612 [2024-12-16 20:09:36.210623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.612 [2024-12-16 20:09:36.210705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:28.612 [2024-12-16 20:09:36.210715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.627 ms 00:16:28.612 [2024-12-16 20:09:36.210725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.612 [2024-12-16 20:09:36.210891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.612 [2024-12-16 20:09:36.210898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:28.612 [2024-12-16 20:09:36.210904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:28.612 [2024-12-16 20:09:36.210909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:28.612 [2024-12-16 20:09:36.240203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.612 [2024-12-16 20:09:36.240226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:28.612 [2024-12-16 20:09:36.240234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.612 [2024-12-16 20:09:36.240243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.612 [2024-12-16 20:09:36.240316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.612 [2024-12-16 20:09:36.240323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:28.612 [2024-12-16 20:09:36.240329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.612 [2024-12-16 20:09:36.240335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.612 [2024-12-16 20:09:36.240365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.612 [2024-12-16 20:09:36.240372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:28.612 [2024-12-16 20:09:36.240378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.612 [2024-12-16 20:09:36.240383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.612 [2024-12-16 20:09:36.240399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.612 [2024-12-16 20:09:36.240406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:28.612 [2024-12-16 20:09:36.240411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.612 [2024-12-16 20:09:36.240416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.297152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.297184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:28.874 [2024-12-16 20:09:36.297192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.297201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:28.874 [2024-12-16 20:09:36.320151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.320156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:28.874 [2024-12-16 20:09:36.320209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.320215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:28.874 [2024-12-16 20:09:36.320252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 
20:09:36.320258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:28.874 [2024-12-16 20:09:36.320362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.320368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:28.874 [2024-12-16 20:09:36.320406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.320411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:28.874 [2024-12-16 20:09:36.320450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.320455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.874 [2024-12-16 20:09:36.320498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:28.874 [2024-12-16 20:09:36.320505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.874 [2024-12-16 20:09:36.320511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.874 [2024-12-16 20:09:36.320617] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 237.639 ms, result 0 00:16:29.818 00:16:29.818 00:16:29.818 20:09:37 -- ftl/trim.sh@72 -- # svcpid=72081 00:16:29.818 20:09:37 -- ftl/trim.sh@73 -- # waitforlisten 72081 00:16:29.818 20:09:37 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:29.818 20:09:37 -- common/autotest_common.sh@829 -- # '[' -z 72081 ']' 00:16:29.818 20:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.818 20:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.818 20:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.818 20:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.818 20:09:37 -- common/autotest_common.sh@10 -- # set +x 00:16:29.818 [2024-12-16 20:09:37.289564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:29.818 [2024-12-16 20:09:37.290184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72081 ] 00:16:29.818 [2024-12-16 20:09:37.448564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.080 [2024-12-16 20:09:37.594910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.080 [2024-12-16 20:09:37.595061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.651 20:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.651 20:09:38 -- common/autotest_common.sh@862 -- # return 0 00:16:30.651 20:09:38 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:30.914 [2024-12-16 20:09:38.299220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:30.914 [2024-12-16 20:09:38.299424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:30.914 [2024-12-16 20:09:38.456286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.456434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:30.914 [2024-12-16 20:09:38.456494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:30.914 [2024-12-16 20:09:38.456514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.458555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.458656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:30.914 [2024-12-16 20:09:38.458775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:16:30.914 [2024-12-16 20:09:38.458801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.458952] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:30.914 [2024-12-16 20:09:38.459547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:30.914 [2024-12-16 20:09:38.459630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.459638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:30.914 [2024-12-16 20:09:38.459646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:16:30.914 [2024-12-16 20:09:38.459652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.460714] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:30.914 [2024-12-16 20:09:38.470414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.470516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:30.914 [2024-12-16 20:09:38.470565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.705 ms 00:16:30.914 [2024-12-16 20:09:38.470585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.470651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.470806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:30.914 [2024-12-16 20:09:38.470826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:30.914 [2024-12-16 20:09:38.470842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.475277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.475394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:30.914 [2024-12-16 20:09:38.475442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:16:30.914 [2024-12-16 20:09:38.475461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.475534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.475554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:30.914 [2024-12-16 20:09:38.475601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:30.914 [2024-12-16 20:09:38.475622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.475651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.475668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:30.914 [2024-12-16 20:09:38.475730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:30.914 [2024-12-16 20:09:38.475746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.475800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:30.914 [2024-12-16 20:09:38.478559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.478582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:30.914 [2024-12-16 20:09:38.478590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:16:30.914 [2024-12-16 20:09:38.478596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.478629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.478635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:30.914 [2024-12-16 20:09:38.478644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:30.914 [2024-12-16 20:09:38.478649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.478666] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:30.914 [2024-12-16 20:09:38.478680] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:30.914 [2024-12-16 20:09:38.478707] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:30.914 [2024-12-16 20:09:38.478718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:30.914 [2024-12-16 20:09:38.478775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:30.914 [2024-12-16 20:09:38.478786] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:30.914 [2024-12-16 20:09:38.478794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:30.914 [2024-12-16 20:09:38.478802] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:30.914 [2024-12-16 20:09:38.478810] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:30.914 [2024-12-16 20:09:38.478816] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:30.914 [2024-12-16 20:09:38.478823] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:30.914 [2024-12-16 20:09:38.478829] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:30.914 [2024-12-16 20:09:38.478837] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:30.914 [2024-12-16 20:09:38.478843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.478850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:30.914 [2024-12-16 20:09:38.478856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:16:30.914 [2024-12-16 20:09:38.478864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.478913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.914 [2024-12-16 20:09:38.478920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:30.914 [2024-12-16 20:09:38.478925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:30.914 [2024-12-16 20:09:38.478931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.914 [2024-12-16 20:09:38.478989] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:30.914 [2024-12-16 20:09:38.478997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:30.914 [2024-12-16 20:09:38.479003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:30.914 [2024-12-16 20:09:38.479010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.914 [2024-12-16 20:09:38.479017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:30.914 [2024-12-16 20:09:38.479024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:30.915 [2024-12-16 20:09:38.479043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:30.915 [2024-12-16 20:09:38.479055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:30.915 [2024-12-16 20:09:38.479061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:30.915 [2024-12-16 20:09:38.479066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:30.915 [2024-12-16 20:09:38.479073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:30.915 [2024-12-16 20:09:38.479078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:30.915 [2024-12-16 20:09:38.479084] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:30.915 [2024-12-16 20:09:38.479096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:30.915 [2024-12-16 20:09:38.479101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:30.915 [2024-12-16 20:09:38.479112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:30.915 [2024-12-16 20:09:38.479118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:30.915 [2024-12-16 20:09:38.479130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:30.915 [2024-12-16 20:09:38.479151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:30.915 [2024-12-16 20:09:38.479168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:30.915 [2024-12-16 20:09:38.479184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:30.915 [2024-12-16 20:09:38.479201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:30.915 [2024-12-16 20:09:38.479212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:30.915 [2024-12-16 20:09:38.479217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:30.915 [2024-12-16 20:09:38.479225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:30.915 [2024-12-16 20:09:38.479229] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:30.915 [2024-12-16 20:09:38.479236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:30.915 [2024-12-16 20:09:38.479241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:30.915 [2024-12-16 20:09:38.479254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:30.915 [2024-12-16 20:09:38.479261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:30.915 [2024-12-16 20:09:38.479266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:30.915 [2024-12-16 20:09:38.479273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:30.915 [2024-12-16 20:09:38.479277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:30.915 [2024-12-16 20:09:38.479284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:30.915 [2024-12-16 20:09:38.479289] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:30.915 [2024-12-16 20:09:38.479469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:30.915 [2024-12-16 20:09:38.479503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:30.915 [2024-12-16 20:09:38.479528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:30.915 [2024-12-16 20:09:38.479550] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:30.915 [2024-12-16 20:09:38.479575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:30.915 [2024-12-16 20:09:38.479629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:30.915 [2024-12-16 20:09:38.479654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:30.915 [2024-12-16 20:09:38.479676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:30.915 [2024-12-16 20:09:38.479699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:30.915 [2024-12-16 20:09:38.479721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:30.915 [2024-12-16 20:09:38.479787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:30.915 [2024-12-16 20:09:38.479843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:30.915 [2024-12-16 20:09:38.479868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:30.915 [2024-12-16 20:09:38.479891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:30.915 [2024-12-16 20:09:38.479913] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:30.915 [2024-12-16 20:09:38.479937] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:30.915 [2024-12-16 20:09:38.480067] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:30.915 [2024-12-16 20:09:38.480090] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:30.915 [2024-12-16 20:09:38.480113] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:30.915 [2024-12-16 20:09:38.480135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:30.915 [2024-12-16 20:09:38.480160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.480175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:30.915 [2024-12-16 20:09:38.480191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:16:30.915 [2024-12-16 20:09:38.480237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.492154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.492244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.915 [2024-12-16 20:09:38.492257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.837 ms 00:16:30.915 [2024-12-16 20:09:38.492265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.492384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.492393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:30.915 [2024-12-16 20:09:38.492400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:30.915 [2024-12-16 20:09:38.492406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.516500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.516527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.915 [2024-12-16 20:09:38.516537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.077 ms 00:16:30.915 [2024-12-16 20:09:38.516544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.516594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.516602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.915 [2024-12-16 20:09:38.516610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:30.915 [2024-12-16 20:09:38.516617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.516899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.516911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.915 [2024-12-16 20:09:38.516920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:16:30.915 [2024-12-16 20:09:38.516925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.517017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.517025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.915 [2024-12-16 20:09:38.517032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:30.915 [2024-12-16 20:09:38.517037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.528911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.528938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.915 [2024-12-16 20:09:38.528947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.857 ms 00:16:30.915 [2024-12-16 20:09:38.528953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.915 [2024-12-16 20:09:38.538474] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:30.915 [2024-12-16 20:09:38.538584] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:30.915 [2024-12-16 20:09:38.538598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.915 [2024-12-16 20:09:38.538604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:30.915 [2024-12-16 20:09:38.538613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.565 ms 00:16:30.916 [2024-12-16 20:09:38.538618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.557187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.557213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:31.177 [2024-12-16 20:09:38.557224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.527 ms 00:16:31.177 [2024-12-16 20:09:38.557230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.566028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.566058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:31.177 [2024-12-16 20:09:38.566067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.745 ms 00:16:31.177 [2024-12-16 20:09:38.566073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.574821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.574846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:31.177 [2024-12-16 20:09:38.574857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.705 ms 00:16:31.177 [2024-12-16 20:09:38.574862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.575131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.575146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.177 [2024-12-16 20:09:38.575154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:16:31.177 [2024-12-16 20:09:38.575160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.619539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.619572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:31.177 [2024-12-16 20:09:38.619583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.359 ms 00:16:31.177 [2024-12-16 20:09:38.619590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.627783] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:31.177 [2024-12-16 20:09:38.639258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.639291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.177 [2024-12-16 20:09:38.639313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.612 ms 00:16:31.177 [2024-12-16 20:09:38.639321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.639375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.639386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:31.177 [2024-12-16 20:09:38.639395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.177 [2024-12-16 20:09:38.639402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.639438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.639446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.177 [2024-12-16 20:09:38.639452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:31.177 [2024-12-16 20:09:38.639459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.640388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.640494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:31.177 [2024-12-16 20:09:38.640506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:16:31.177 [2024-12-16 20:09:38.640514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.640540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.640548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.177 [2024-12-16 20:09:38.640554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:31.177 [2024-12-16 20:09:38.640560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.640586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:31.177 [2024-12-16 20:09:38.640596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.640602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:31.177 [2024-12-16 20:09:38.640610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:31.177 [2024-12-16 20:09:38.640615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.658396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.658422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.177 [2024-12-16 20:09:38.658432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.759 ms 00:16:31.177 [2024-12-16 20:09:38.658438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.177 [2024-12-16 20:09:38.658504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.177 [2024-12-16 20:09:38.658512] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.177 [2024-12-16 20:09:38.658519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:31.178 [2024-12-16 20:09:38.658527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.178 [2024-12-16 20:09:38.659131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:31.178 [2024-12-16 20:09:38.661574] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 202.631 ms, result 0 00:16:31.178 [2024-12-16 20:09:38.662231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.178 Some configs were skipped because the RPC state that can call them passed over. 00:16:31.178 20:09:38 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:31.439 [2024-12-16 20:09:38.895833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.439 [2024-12-16 20:09:38.895949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:31.439 [2024-12-16 20:09:38.895992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.711 ms 00:16:31.439 [2024-12-16 20:09:38.896014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.439 [2024-12-16 20:09:38.896054] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.931 ms, result 0 00:16:31.439 true 00:16:31.439 20:09:38 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:31.700 [2024-12-16 20:09:39.098359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.700 [2024-12-16 20:09:39.098473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:31.700 [2024-12-16 20:09:39.098517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.261 ms 00:16:31.700 [2024-12-16 20:09:39.098535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.700 [2024-12-16 20:09:39.098576] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.477 ms, result 0 00:16:31.700 true 00:16:31.700 20:09:39 -- ftl/trim.sh@81 -- # killprocess 72081 00:16:31.700 20:09:39 -- common/autotest_common.sh@936 -- # '[' -z 72081 ']' 00:16:31.700 20:09:39 -- common/autotest_common.sh@940 -- # kill -0 72081 00:16:31.700 20:09:39 -- common/autotest_common.sh@941 -- # uname 00:16:31.700 20:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.700 20:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72081 00:16:31.700 killing process with pid 72081 00:16:31.700 20:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:31.700 20:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:31.700 20:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72081' 00:16:31.700 20:09:39 -- common/autotest_common.sh@955 -- # kill 72081 00:16:31.700 20:09:39 -- common/autotest_common.sh@960 -- # wait 72081 00:16:32.274 [2024-12-16 20:09:39.678580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.678627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:16:32.274 [2024-12-16 20:09:39.678638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:32.274 [2024-12-16 20:09:39.678647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.678664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:32.274 [2024-12-16 20:09:39.680701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.680726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:32.274 [2024-12-16 20:09:39.680737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:16:32.274 [2024-12-16 20:09:39.680744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.680990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.680998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:32.274 [2024-12-16 20:09:39.681005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:16:32.274 [2024-12-16 20:09:39.681011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.684259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.684285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:32.274 [2024-12-16 20:09:39.684294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:16:32.274 [2024-12-16 20:09:39.684306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.689567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.689684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:32.274 [2024-12-16 20:09:39.689699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.232 ms 00:16:32.274 [2024-12-16 20:09:39.689706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.697404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.697430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:32.274 [2024-12-16 20:09:39.697440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.636 ms 00:16:32.274 [2024-12-16 20:09:39.697446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.703899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.703925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.274 [2024-12-16 20:09:39.703934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.421 ms 00:16:32.274 [2024-12-16 20:09:39.703941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.704049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.704057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.274 [2024-12-16 20:09:39.704065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:32.274 [2024-12-16 20:09:39.704070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 
20:09:39.712099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.712123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:32.274 [2024-12-16 20:09:39.712131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.011 ms 00:16:32.274 [2024-12-16 20:09:39.712136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.274 [2024-12-16 20:09:39.719421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.274 [2024-12-16 20:09:39.719445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:32.274 [2024-12-16 20:09:39.719456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.254 ms 00:16:32.275 [2024-12-16 20:09:39.719461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.275 [2024-12-16 20:09:39.726460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.275 [2024-12-16 20:09:39.726484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.275 [2024-12-16 20:09:39.726492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:16:32.275 [2024-12-16 20:09:39.726497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.275 [2024-12-16 20:09:39.733673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.275 [2024-12-16 20:09:39.733697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.275 [2024-12-16 20:09:39.733705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.126 ms 00:16:32.275 [2024-12-16 20:09:39.733710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.275 [2024-12-16 20:09:39.733738] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.275 [2024-12-16 20:09:39.733751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733990] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.733998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 
20:09:39.734149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.275 [2024-12-16 20:09:39.734280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:16:32.276 [2024-12-16 20:09:39.734328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.276 [2024-12-16 20:09:39.734428] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.276 [2024-12-16 20:09:39.734437] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:16:32.276 [2024-12-16 20:09:39.734443] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.276 [2024-12-16 20:09:39.734450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.276 [2024-12-16 20:09:39.734455] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.276 [2024-12-16 20:09:39.734462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.276 [2024-12-16 20:09:39.734468] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.276 [2024-12-16 20:09:39.734475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:32.276 [2024-12-16 20:09:39.734481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.276 [2024-12-16 20:09:39.734487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.276 [2024-12-16 20:09:39.734491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.276 [2024-12-16 20:09:39.734498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.276 [2024-12-16 20:09:39.734504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.276 [2024-12-16 20:09:39.734513] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:16:32.276 [2024-12-16 20:09:39.734519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.744141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.276 [2024-12-16 20:09:39.744165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.276 [2024-12-16 20:09:39.744176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.605 ms 00:16:32.276 [2024-12-16 20:09:39.744182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.744363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.276 [2024-12-16 20:09:39.744372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.276 [2024-12-16 20:09:39.744381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:32.276 [2024-12-16 20:09:39.744387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.779585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.779610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.276 [2024-12-16 20:09:39.779619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.779626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.779691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.779700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.276 [2024-12-16 20:09:39.779708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.779713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.779746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.779753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.276 [2024-12-16 20:09:39.779762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.779767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.779782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.779788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.276 [2024-12-16 20:09:39.779797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.779802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.841102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.841134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.276 [2024-12-16 20:09:39.841145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.841151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.863972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:32.276 [2024-12-16 20:09:39.864106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.276 [2024-12-16 20:09:39.864172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.276 [2024-12-16 20:09:39.864218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.276 [2024-12-16 20:09:39.864332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.276 [2024-12-16 20:09:39.864379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.276 [2024-12-16 20:09:39.864433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.276 [2024-12-16 20:09:39.864481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.276 [2024-12-16 20:09:39.864488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.276 [2024-12-16 20:09:39.864496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.276 [2024-12-16 20:09:39.864600] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 186.006 ms, result 0 00:16:33.219 20:09:40 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:33.219 20:09:40 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.219 [2024-12-16 20:09:40.555971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 
23.11.0 initialization... 00:16:33.219 [2024-12-16 20:09:40.556090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72128 ] 00:16:33.219 [2024-12-16 20:09:40.705428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.219 [2024-12-16 20:09:40.855533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.480 [2024-12-16 20:09:41.060309] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.480 [2024-12-16 20:09:41.060359] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.744 [2024-12-16 20:09:41.201026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.201064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:33.744 [2024-12-16 20:09:41.201074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.744 [2024-12-16 20:09:41.201080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.203144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.203176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.744 [2024-12-16 20:09:41.203184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:16:33.744 [2024-12-16 20:09:41.203189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.203244] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:33.744 [2024-12-16 20:09:41.203825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:33.744 [2024-12-16 20:09:41.203847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.203853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.744 [2024-12-16 20:09:41.203859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:16:33.744 [2024-12-16 20:09:41.203865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.204834] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:33.744 [2024-12-16 20:09:41.214516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.214544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:33.744 [2024-12-16 20:09:41.214553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.683 ms 00:16:33.744 [2024-12-16 20:09:41.214558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.214623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.214632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:33.744 [2024-12-16 20:09:41.214639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:33.744 [2024-12-16 20:09:41.214644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.219008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:33.744 [2024-12-16 20:09:41.219033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.744 [2024-12-16 20:09:41.219040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms 00:16:33.744 [2024-12-16 20:09:41.219048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.219128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.219136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.744 [2024-12-16 20:09:41.219142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:33.744 [2024-12-16 20:09:41.219147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.219163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.219169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:33.744 [2024-12-16 20:09:41.219175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.744 [2024-12-16 20:09:41.219181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.219204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:33.744 [2024-12-16 20:09:41.221955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.221977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.744 [2024-12-16 20:09:41.221985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:16:33.744 [2024-12-16 20:09:41.221992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.222022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.222029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:33.744 [2024-12-16 20:09:41.222034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:33.744 [2024-12-16 20:09:41.222040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.222053] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:33.744 [2024-12-16 20:09:41.222067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:33.744 [2024-12-16 20:09:41.222092] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:33.744 [2024-12-16 20:09:41.222105] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:33.744 [2024-12-16 20:09:41.222160] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:33.744 [2024-12-16 20:09:41.222168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:33.744 [2024-12-16 20:09:41.222175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:33.744 [2024-12-16 20:09:41.222183] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:33.744 [2024-12-16 20:09:41.222189] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:33.744 [2024-12-16 20:09:41.222195] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:33.744 [2024-12-16 20:09:41.222200] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:33.744 [2024-12-16 20:09:41.222206] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:33.744 [2024-12-16 20:09:41.222214] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:33.744 [2024-12-16 20:09:41.222220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.222226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:33.744 [2024-12-16 20:09:41.222231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:16:33.744 [2024-12-16 20:09:41.222236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.222286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.744 [2024-12-16 20:09:41.222292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:33.744 [2024-12-16 20:09:41.222311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:33.744 [2024-12-16 20:09:41.222317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.744 [2024-12-16 20:09:41.222373] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:33.744 [2024-12-16 20:09:41.222380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:33.744 [2024-12-16 20:09:41.222389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.744 [2024-12-16 20:09:41.222395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.744 [2024-12-16 20:09:41.222401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:33.744 [2024-12-16 20:09:41.222406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:33.744 [2024-12-16 20:09:41.222411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:33.744 [2024-12-16 20:09:41.222416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:33.744 [2024-12-16 20:09:41.222421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:33.744 [2024-12-16 20:09:41.222425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.744 [2024-12-16 20:09:41.222430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:33.744 [2024-12-16 20:09:41.222435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:33.744 [2024-12-16 20:09:41.222440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.744 [2024-12-16 20:09:41.222445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:33.744 [2024-12-16 20:09:41.222454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:33.744 [2024-12-16 20:09:41.222460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.744 [2024-12-16 20:09:41.222466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:33.744 [2024-12-16 20:09:41.222471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:33.744 [2024-12-16 20:09:41.222475] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.744 [2024-12-16 20:09:41.222480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:33.744 [2024-12-16 20:09:41.222485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:33.744 [2024-12-16 20:09:41.222490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:33.744 [2024-12-16 20:09:41.222495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:33.745 [2024-12-16 20:09:41.222499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:33.745 [2024-12-16 20:09:41.222504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.745 [2024-12-16 20:09:41.222508] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:33.745 [2024-12-16 20:09:41.222513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:33.745 [2024-12-16 20:09:41.222518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.745 [2024-12-16 20:09:41.222522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:33.745 [2024-12-16 20:09:41.222527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:33.745 [2024-12-16 20:09:41.222532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.745 [2024-12-16 20:09:41.222537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:33.745 [2024-12-16 20:09:41.222541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:33.745 [2024-12-16 20:09:41.222546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.745 [2024-12-16 20:09:41.222551] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:33.745 [2024-12-16 20:09:41.222555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:33.745 [2024-12-16 20:09:41.222560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.745 [2024-12-16 20:09:41.222565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:33.745 [2024-12-16 20:09:41.222570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:33.745 [2024-12-16 20:09:41.222575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.745 [2024-12-16 20:09:41.222579] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:33.745 [2024-12-16 20:09:41.222585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:33.745 [2024-12-16 20:09:41.222590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.745 [2024-12-16 20:09:41.222598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.745 [2024-12-16 20:09:41.222604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:33.745 [2024-12-16 20:09:41.222609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:33.745 [2024-12-16 20:09:41.222614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:33.745 [2024-12-16 20:09:41.222621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:33.745 [2024-12-16 20:09:41.222626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:33.745 [2024-12-16 20:09:41.222630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:33.745 
[2024-12-16 20:09:41.222636] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:33.745 [2024-12-16 20:09:41.222644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.745 [2024-12-16 20:09:41.222650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:33.745 [2024-12-16 20:09:41.222655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:33.745 [2024-12-16 20:09:41.222661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:33.745 [2024-12-16 20:09:41.222666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:33.745 [2024-12-16 20:09:41.222671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:33.745 [2024-12-16 20:09:41.222676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:33.745 [2024-12-16 20:09:41.222682] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:33.745 [2024-12-16 20:09:41.222687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:33.745 [2024-12-16 20:09:41.222692] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:33.745 [2024-12-16 20:09:41.222697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:33.745 [2024-12-16 20:09:41.222710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:33.745 [2024-12-16 20:09:41.222716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:33.745 [2024-12-16 20:09:41.222721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:33.745 [2024-12-16 20:09:41.222727] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:33.745 [2024-12-16 20:09:41.222734] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.745 [2024-12-16 20:09:41.222740] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:33.745 [2024-12-16 20:09:41.222745] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:33.745 [2024-12-16 20:09:41.222751] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:33.745 [2024-12-16 20:09:41.222756] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:33.745 [2024-12-16 20:09:41.222762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.222767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:33.745 [2024-12-16 20:09:41.222772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:16:33.745 [2024-12-16 20:09:41.222778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.234634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.234750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.745 [2024-12-16 20:09:41.234762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.824 ms 00:16:33.745 [2024-12-16 20:09:41.234768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.234856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.234863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.745 [2024-12-16 20:09:41.234869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:33.745 [2024-12-16 20:09:41.234875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.270416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.270530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.745 [2024-12-16 20:09:41.270544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.526 ms 00:16:33.745 [2024-12-16 20:09:41.270551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.270607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.270615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.745 [2024-12-16 20:09:41.270626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:33.745 [2024-12-16 20:09:41.270631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.270922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.270934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.745 [2024-12-16 20:09:41.270941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:33.745 [2024-12-16 20:09:41.270947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.271041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.271048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.745 [2024-12-16 20:09:41.271054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:33.745 [2024-12-16 20:09:41.271060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.282435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.282460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.745 [2024-12-16 20:09:41.282468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.358 ms 00:16:33.745 [2024-12-16 20:09:41.282475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.292212] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:33.745 [2024-12-16 20:09:41.292239] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:33.745 [2024-12-16 20:09:41.292248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.292255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:33.745 [2024-12-16 20:09:41.292261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.698 ms 00:16:33.745 [2024-12-16 20:09:41.292267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.310802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.310833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:33.745 [2024-12-16 20:09:41.310841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.478 ms 00:16:33.745 [2024-12-16 20:09:41.310848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.319506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.319531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:33.745 [2024-12-16 20:09:41.319543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.606 ms 00:16:33.745 [2024-12-16 20:09:41.319548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.328450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.328474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:33.745 [2024-12-16 20:09:41.328481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.862 ms 00:16:33.745 [2024-12-16 20:09:41.328486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.328758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.745 [2024-12-16 20:09:41.328771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:33.745 [2024-12-16 20:09:41.328778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:33.745 [2024-12-16 20:09:41.328785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.745 [2024-12-16 20:09:41.373963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.746 [2024-12-16 20:09:41.374097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:33.746 [2024-12-16 20:09:41.374111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.161 ms 00:16:33.746 [2024-12-16 20:09:41.374121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.746 [2024-12-16 20:09:41.381975] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:34.007 [2024-12-16 20:09:41.393324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.393351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:34.008 [2024-12-16 
20:09:41.393360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.147 ms 00:16:34.008 [2024-12-16 20:09:41.393367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.393421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.393428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:34.008 [2024-12-16 20:09:41.393437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:34.008 [2024-12-16 20:09:41.393443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.393482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.393489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:34.008 [2024-12-16 20:09:41.393495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:34.008 [2024-12-16 20:09:41.393500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.394420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.394441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:34.008 [2024-12-16 20:09:41.394447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:16:34.008 [2024-12-16 20:09:41.394453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.394476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.394484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:34.008 [2024-12-16 20:09:41.394490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:34.008 [2024-12-16 20:09:41.394495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.394520] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:34.008 [2024-12-16 20:09:41.394527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.394533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:34.008 [2024-12-16 20:09:41.394539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:34.008 [2024-12-16 20:09:41.394544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.412572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.412672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:34.008 [2024-12-16 20:09:41.412685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.012 ms 00:16:34.008 [2024-12-16 20:09:41.412691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.412754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.008 [2024-12-16 20:09:41.412762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:34.008 [2024-12-16 20:09:41.412769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:34.008 [2024-12-16 20:09:41.412774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.008 [2024-12-16 20:09:41.413479] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.008 [2024-12-16 20:09:41.415899] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 212.226 ms, result 0 00:16:34.008 [2024-12-16 20:09:41.416525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:34.008 [2024-12-16 20:09:41.431533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.953  [2024-12-16T20:09:43.538Z] Copying: 22/256 [MB] (22 MBps) [2024-12-16T20:09:44.482Z] Copying: 33/256 [MB] (10 MBps) [2024-12-16T20:09:45.870Z] Copying: 48/256 [MB] (15 MBps) [2024-12-16T20:09:46.478Z] Copying: 58/256 [MB] (10 MBps) [2024-12-16T20:09:47.479Z] Copying: 69/256 [MB] (10 MBps) [2024-12-16T20:09:48.866Z] Copying: 81/256 [MB] (12 MBps) [2024-12-16T20:09:49.439Z] Copying: 91/256 [MB] (10 MBps) [2024-12-16T20:09:50.830Z] Copying: 102/256 [MB] (10 MBps) [2024-12-16T20:09:51.773Z] Copying: 112/256 [MB] (10 MBps) [2024-12-16T20:09:52.750Z] Copying: 122/256 [MB] (10 MBps) [2024-12-16T20:09:53.693Z] Copying: 132/256 [MB] (10 MBps) [2024-12-16T20:09:54.635Z] Copying: 142/256 [MB] (10 MBps) [2024-12-16T20:09:55.578Z] Copying: 153/256 [MB] (10 MBps) [2024-12-16T20:09:56.522Z] Copying: 163/256 [MB] (10 MBps) [2024-12-16T20:09:57.465Z] Copying: 173/256 [MB] (10 MBps) [2024-12-16T20:09:58.851Z] Copying: 190/256 [MB] (17 MBps) [2024-12-16T20:09:59.795Z] Copying: 202/256 [MB] (11 MBps) [2024-12-16T20:10:00.739Z] Copying: 213/256 [MB] (10 MBps) [2024-12-16T20:10:01.683Z] Copying: 225/256 [MB] (12 MBps) [2024-12-16T20:10:02.625Z] Copying: 239/256 [MB] (14 MBps) [2024-12-16T20:10:03.199Z] Copying: 250/256 [MB] (10 MBps) [2024-12-16T20:10:03.199Z] Copying: 256/256 [MB] (average 11 MBps)[2024-12-16 20:10:02.992551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:55.559 [2024-12-16 20:10:03.002832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.002892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:55.559 [2024-12-16 20:10:03.002907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.559 [2024-12-16 20:10:03.002916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.002941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:55.559 [2024-12-16 20:10:03.005823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.005865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:55.559 [2024-12-16 20:10:03.005877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:16:55.559 [2024-12-16 20:10:03.005885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.006162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.006173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:55.559 [2024-12-16 20:10:03.006182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:16:55.559 [2024-12-16 20:10:03.006194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 
20:10:03.009965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.009991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:55.559 [2024-12-16 20:10:03.010001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:16:55.559 [2024-12-16 20:10:03.010010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.016885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.016928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:55.559 [2024-12-16 20:10:03.016938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.841 ms 00:16:55.559 [2024-12-16 20:10:03.016946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.042842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.042890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.559 [2024-12-16 20:10:03.042903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.822 ms 00:16:55.559 [2024-12-16 20:10:03.042910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.059161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.059205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:55.559 [2024-12-16 20:10:03.059217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.187 ms 00:16:55.559 [2024-12-16 20:10:03.059225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.059417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.059440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:55.559 [2024-12-16 20:10:03.059450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:55.559 [2024-12-16 20:10:03.059458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.085169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.085373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:55.559 [2024-12-16 20:10:03.085395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.694 ms 00:16:55.559 [2024-12-16 20:10:03.085402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.110689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.110732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:55.559 [2024-12-16 20:10:03.110744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.193 ms 00:16:55.559 [2024-12-16 20:10:03.110751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.559 [2024-12-16 20:10:03.135695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.559 [2024-12-16 20:10:03.135740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:55.560 [2024-12-16 20:10:03.135751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.882 ms 00:16:55.560 [2024-12-16 20:10:03.135758] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.560 [2024-12-16 20:10:03.160348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.560 [2024-12-16 20:10:03.160392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:55.560 [2024-12-16 20:10:03.160404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.497 ms 00:16:55.560 [2024-12-16 20:10:03.160411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.560 [2024-12-16 20:10:03.160471] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:55.560 [2024-12-16 20:10:03.160488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160639] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160828] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.160999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 
20:10:03.161014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:55.560 [2024-12-16 20:10:03.161134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:16:55.561 [2024-12-16 20:10:03.161201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:55.561 [2024-12-16 20:10:03.161257] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:55.561 [2024-12-16 20:10:03.161264] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:16:55.561 [2024-12-16 20:10:03.161273] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:55.561 [2024-12-16 20:10:03.161280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:55.561 [2024-12-16 20:10:03.161286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:55.561 [2024-12-16 20:10:03.161294] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:55.561 [2024-12-16 20:10:03.161325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:55.561 [2024-12-16 20:10:03.161337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:55.561 [2024-12-16 20:10:03.161345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:55.561 [2024-12-16 20:10:03.161351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:55.561 [2024-12-16 20:10:03.161358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:55.561 [2024-12-16 20:10:03.161366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.561 [2024-12-16 20:10:03.161373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:55.561 [2024-12-16 20:10:03.161383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:16:55.561 [2024-12-16 20:10:03.161390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.561 [2024-12-16 20:10:03.175097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.561 [2024-12-16 20:10:03.175137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:55.561 [2024-12-16 20:10:03.175155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.673 ms 00:16:55.561 [2024-12-16 20:10:03.175163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.561 [2024-12-16 20:10:03.175451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.561 [2024-12-16 20:10:03.175463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:55.561 [2024-12-16 20:10:03.175473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:16:55.561 [2024-12-16 20:10:03.175480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.822 [2024-12-16 20:10:03.217294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.217348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:55.823 [2024-12-16 20:10:03.217365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.217373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.217471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.217481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.823 [2024-12-16 20:10:03.217489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.217497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.217548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.217558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.823 [2024-12-16 20:10:03.217566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.217579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.217597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.217605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.823 [2024-12-16 20:10:03.217613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.217620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.297521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.297572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.823 [2024-12-16 20:10:03.297590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.297598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.329506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.329551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.823 [2024-12-16 20:10:03.329562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.329571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.329634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.329644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.823 [2024-12-16 20:10:03.329653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.329661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.329701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.329710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.823 [2024-12-16 20:10:03.329718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.329726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.329830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.329840] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.823 [2024-12-16 20:10:03.329849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.329856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.329896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.329906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:55.823 [2024-12-16 20:10:03.329914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.329923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.329966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.329976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.823 [2024-12-16 20:10:03.329985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.329993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.330046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.823 [2024-12-16 20:10:03.330060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.823 [2024-12-16 20:10:03.330069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.823 [2024-12-16 20:10:03.330077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.823 [2024-12-16 20:10:03.330238] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.414 ms, result 0 00:16:56.766 00:16:56.766 00:16:56.766 20:10:04 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:56.766 20:10:04 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:57.338 20:10:04 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:57.338 [2024-12-16 20:10:04.862341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:57.338 [2024-12-16 20:10:04.862939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:16:57.602 [2024-12-16 20:10:05.016588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.602 [2024-12-16 20:10:05.237170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.180 [2024-12-16 20:10:05.522395] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.180 [2024-12-16 20:10:05.522478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.180 [2024-12-16 20:10:05.677944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.678006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.180 [2024-12-16 20:10:05.678021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.180 [2024-12-16 20:10:05.678030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.681076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.681290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.180 [2024-12-16 20:10:05.681325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:16:58.180 [2024-12-16 20:10:05.681334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.681733] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.180 [2024-12-16 20:10:05.682579] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.180 [2024-12-16 20:10:05.682619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.682629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.180 [2024-12-16 20:10:05.682640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:16:58.180 [2024-12-16 20:10:05.682649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.684365] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.180 [2024-12-16 20:10:05.698817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.698863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.180 [2024-12-16 20:10:05.698876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.455 ms 00:16:58.180 [2024-12-16 20:10:05.698885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.699004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.699016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.180 [2024-12-16 20:10:05.699026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:58.180 [2024-12-16 20:10:05.699034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.707081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 
20:10:05.707251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.180 [2024-12-16 20:10:05.707269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.000 ms 00:16:58.180 [2024-12-16 20:10:05.707284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.707423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.707447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.180 [2024-12-16 20:10:05.707457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:58.180 [2024-12-16 20:10:05.707465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.707495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.707505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.180 [2024-12-16 20:10:05.707513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:58.180 [2024-12-16 20:10:05.707521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.707554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.180 [2024-12-16 20:10:05.711709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.711746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.180 [2024-12-16 20:10:05.711756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:16:58.180 [2024-12-16 20:10:05.711768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.711847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.711857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.180 [2024-12-16 20:10:05.711866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:58.180 [2024-12-16 20:10:05.711875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.711894] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.180 [2024-12-16 20:10:05.711916] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:58.180 [2024-12-16 20:10:05.711951] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.180 [2024-12-16 20:10:05.711969] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:58.180 [2024-12-16 20:10:05.712045] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:58.180 [2024-12-16 20:10:05.712055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.180 [2024-12-16 20:10:05.712065] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:58.180 [2024-12-16 20:10:05.712075] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712084] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712092] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.180 [2024-12-16 20:10:05.712099] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.180 [2024-12-16 20:10:05.712106] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:58.180 [2024-12-16 20:10:05.712117] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:58.180 [2024-12-16 20:10:05.712125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.712134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.180 [2024-12-16 20:10:05.712142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:16:58.180 [2024-12-16 20:10:05.712149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.712215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.180 [2024-12-16 20:10:05.712225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.180 [2024-12-16 20:10:05.712232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:58.180 [2024-12-16 20:10:05.712240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.180 [2024-12-16 20:10:05.712338] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.180 [2024-12-16 20:10:05.712350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.180 [2024-12-16 20:10:05.712359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.180 [2024-12-16 20:10:05.712382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.180 [2024-12-16 20:10:05.712405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.180 [2024-12-16 20:10:05.712420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.180 [2024-12-16 20:10:05.712427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.180 [2024-12-16 20:10:05.712433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.180 [2024-12-16 20:10:05.712444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.180 [2024-12-16 20:10:05.712458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:58.180 [2024-12-16 20:10:05.712465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.180 [2024-12-16 20:10:05.712479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:58.180 [2024-12-16 20:10:05.712485] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:58.180 [2024-12-16 20:10:05.712499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:58.180 [2024-12-16 20:10:05.712506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.180 [2024-12-16 20:10:05.712519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.180 [2024-12-16 20:10:05.712542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712554] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.180 [2024-12-16 20:10:05.712560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.180 [2024-12-16 20:10:05.712580] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:58.180 [2024-12-16 20:10:05.712587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.180 [2024-12-16 20:10:05.712593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.180 [2024-12-16 20:10:05.712600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.181 [2024-12-16 20:10:05.712607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.181 [2024-12-16 20:10:05.712613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.181 [2024-12-16 20:10:05.712620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:58.181 [2024-12-16 20:10:05.712627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.181 [2024-12-16 20:10:05.712633] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.181 [2024-12-16 20:10:05.712641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.181 [2024-12-16 20:10:05.712649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.181 [2024-12-16 20:10:05.712659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.181 [2024-12-16 20:10:05.712666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.181 [2024-12-16 20:10:05.712674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.181 [2024-12-16 20:10:05.712683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.181 [2024-12-16 20:10:05.712691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.181 [2024-12-16 20:10:05.712697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.181 [2024-12-16 20:10:05.712704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.181 [2024-12-16 20:10:05.712711] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.181 [2024-12-16 20:10:05.712721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.181 [2024-12-16 20:10:05.712730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.181 [2024-12-16 20:10:05.712737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:58.181 [2024-12-16 20:10:05.712744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.181 [2024-12-16 20:10:05.712751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:58.181 [2024-12-16 20:10:05.712757] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:58.181 [2024-12-16 20:10:05.712764] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:58.181 [2024-12-16 20:10:05.712772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:58.181 [2024-12-16 20:10:05.712779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:58.181 [2024-12-16 20:10:05.712786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:58.181 [2024-12-16 20:10:05.712793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:58.181 [2024-12-16 20:10:05.712800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:58.181 [2024-12-16 20:10:05.712807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:58.181 [2024-12-16 20:10:05.712815] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:58.181 [2024-12-16 20:10:05.712822] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.181 [2024-12-16 20:10:05.712836] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.181 [2024-12-16 20:10:05.712845] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.181 [2024-12-16 20:10:05.712852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.181 [2024-12-16 20:10:05.712860] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.181 [2024-12-16 20:10:05.712868] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.181 [2024-12-16 20:10:05.712875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.712883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.181 [2024-12-16 20:10:05.712890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:16:58.181 [2024-12-16 20:10:05.712898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.730904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.730953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.181 [2024-12-16 20:10:05.730965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.961 ms 00:16:58.181 [2024-12-16 20:10:05.730975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.731105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.731116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.181 [2024-12-16 20:10:05.731126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:58.181 [2024-12-16 20:10:05.731135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.774751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.774802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.181 [2024-12-16 20:10:05.774815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.593 ms 00:16:58.181 [2024-12-16 20:10:05.774825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.774909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.774920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.181 [2024-12-16 20:10:05.774934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.181 [2024-12-16 20:10:05.774942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.775524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.775546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.181 [2024-12-16 20:10:05.775556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:16:58.181 [2024-12-16 20:10:05.775564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.775709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.775720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.181 [2024-12-16 20:10:05.775728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:16:58.181 [2024-12-16 20:10:05.775737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.792667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.792711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.181 [2024-12-16 20:10:05.792723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.903 ms 00:16:58.181 
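Every management step in the startup trace above is reported the same way: an Action (or Rollback) record, the step name, its duration, and a status. When digging through a full console log, one quick way to tabulate those steps is the pipeline below (a sketch assuming one record per line, as in the raw console output; the log filename is illustrative):

  grep -E 'trace_step.*(name|duration):' console.log \
    | sed -E 's/.*(name|duration): //' \
    | paste - -   # pairs each step name with its duration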
[2024-12-16 20:10:05.792734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.181 [2024-12-16 20:10:05.807200] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:58.181 [2024-12-16 20:10:05.807247] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.181 [2024-12-16 20:10:05.807260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.181 [2024-12-16 20:10:05.807268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.181 [2024-12-16 20:10:05.807278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.412 ms 00:16:58.181 [2024-12-16 20:10:05.807285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.833964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.834157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.478 [2024-12-16 20:10:05.834179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.571 ms 00:16:58.478 [2024-12-16 20:10:05.834188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.846828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.846873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.478 [2024-12-16 20:10:05.846894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.556 ms 00:16:58.478 [2024-12-16 20:10:05.846902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.859329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.859371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.478 [2024-12-16 20:10:05.859383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.345 ms 00:16:58.478 [2024-12-16 20:10:05.859391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.859805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.859820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.478 [2024-12-16 20:10:05.859829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:16:58.478 [2024-12-16 20:10:05.859840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.925557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.925620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.478 [2024-12-16 20:10:05.925636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.692 ms 00:16:58.478 [2024-12-16 20:10:05.925652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.936877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:58.478 [2024-12-16 20:10:05.955956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.956005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.478 [2024-12-16 20:10:05.956018] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.200 ms 00:16:58.478 [2024-12-16 20:10:05.956027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.956111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.956121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.478 [2024-12-16 20:10:05.956134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:58.478 [2024-12-16 20:10:05.956143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.956202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.956211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.478 [2024-12-16 20:10:05.956220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:58.478 [2024-12-16 20:10:05.956228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.957631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.957674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:58.478 [2024-12-16 20:10:05.957684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:16:58.478 [2024-12-16 20:10:05.957691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.957729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.957742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.478 [2024-12-16 20:10:05.957751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.478 [2024-12-16 20:10:05.957760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.957797] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.478 [2024-12-16 20:10:05.957808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.957816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.478 [2024-12-16 20:10:05.957824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:58.478 [2024-12-16 20:10:05.957831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.984000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.984046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.478 [2024-12-16 20:10:05.984060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.141 ms 00:16:58.478 [2024-12-16 20:10:05.984069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.984177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.478 [2024-12-16 20:10:05.984189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.478 [2024-12-16 20:10:05.984198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:58.478 [2024-12-16 20:10:05.984207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.478 [2024-12-16 20:10:05.986168] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.478 [2024-12-16 20:10:05.989752] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.894 ms, result 0 00:16:58.478 [2024-12-16 20:10:05.990864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:58.478 [2024-12-16 20:10:06.004942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.739  [2024-12-16T20:10:06.380Z] Copying: 4096/4096 [kB] (average 17 MBps)[2024-12-16 20:10:06.240338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:58.740 [2024-12-16 20:10:06.249608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.249665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:58.740 [2024-12-16 20:10:06.249678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:58.740 [2024-12-16 20:10:06.249686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.249711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:58.740 [2024-12-16 20:10:06.252561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.252600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:58.740 [2024-12-16 20:10:06.252612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:16:58.740 [2024-12-16 20:10:06.252620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.255459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.255501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:58.740 [2024-12-16 20:10:06.255512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:16:58.740 [2024-12-16 20:10:06.255526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.260100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.260135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:58.740 [2024-12-16 20:10:06.260146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.557 ms 00:16:58.740 [2024-12-16 20:10:06.260153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.267053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.267091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:58.740 [2024-12-16 20:10:06.267103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:16:58.740 [2024-12-16 20:10:06.267119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.292551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.292596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:58.740 [2024-12-16 20:10:06.292608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.374 ms 00:16:58.740 [2024-12-16 
20:10:06.292615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.308405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.308449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:58.740 [2024-12-16 20:10:06.308462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.713 ms 00:16:58.740 [2024-12-16 20:10:06.308470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.308634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.308646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:58.740 [2024-12-16 20:10:06.308655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:58.740 [2024-12-16 20:10:06.308664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.334505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.334548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:58.740 [2024-12-16 20:10:06.334560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.824 ms 00:16:58.740 [2024-12-16 20:10:06.334567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.740 [2024-12-16 20:10:06.360107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.740 [2024-12-16 20:10:06.360149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:58.740 [2024-12-16 20:10:06.360161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.467 ms 00:16:58.740 [2024-12-16 20:10:06.360168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.004 [2024-12-16 20:10:06.384678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.004 [2024-12-16 20:10:06.384719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:59.004 [2024-12-16 20:10:06.384730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.436 ms 00:16:59.004 [2024-12-16 20:10:06.384737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.004 [2024-12-16 20:10:06.409533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.004 [2024-12-16 20:10:06.409729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:59.004 [2024-12-16 20:10:06.409750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.705 ms 00:16:59.004 [2024-12-16 20:10:06.409757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.004 [2024-12-16 20:10:06.409858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:59.004 [2024-12-16 20:10:06.409876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409910] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.409999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 
20:10:06.410094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:16:59.004 [2024-12-16 20:10:06.410283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:59.004 [2024-12-16 20:10:06.410493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:59.005 [2024-12-16 20:10:06.410685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:59.005 [2024-12-16 20:10:06.410693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:16:59.005 [2024-12-16 20:10:06.410702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:59.005 [2024-12-16 20:10:06.410709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:59.005 [2024-12-16 
20:10:06.410716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:59.005 [2024-12-16 20:10:06.410724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:59.005 [2024-12-16 20:10:06.410735] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:59.005 [2024-12-16 20:10:06.410744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:59.005 [2024-12-16 20:10:06.410751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:59.005 [2024-12-16 20:10:06.410757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:59.005 [2024-12-16 20:10:06.410763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:59.005 [2024-12-16 20:10:06.410771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.005 [2024-12-16 20:10:06.410779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:59.005 [2024-12-16 20:10:06.410788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:16:59.005 [2024-12-16 20:10:06.410795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.424240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.005 [2024-12-16 20:10:06.424284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:59.005 [2024-12-16 20:10:06.424342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:16:59.005 [2024-12-16 20:10:06.424350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.424604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.005 [2024-12-16 20:10:06.424615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:59.005 [2024-12-16 20:10:06.424624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:16:59.005 [2024-12-16 20:10:06.424632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.466120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.466168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.005 [2024-12-16 20:10:06.466186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.466194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.466281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.466291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.005 [2024-12-16 20:10:06.466323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.466332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.466381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.466391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.005 [2024-12-16 20:10:06.466399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.466412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.466430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.466438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.005 [2024-12-16 20:10:06.466445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.466453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.546611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.546664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.005 [2024-12-16 20:10:06.546683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.546691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.578809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.578860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.005 [2024-12-16 20:10:06.578872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.578881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.578946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.578956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.005 [2024-12-16 20:10:06.578965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.578973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.579011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.579021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.005 [2024-12-16 20:10:06.579029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.579039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.579143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.579153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.005 [2024-12-16 20:10:06.579162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.579170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.579212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.579222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:59.005 [2024-12-16 20:10:06.579232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.005 [2024-12-16 20:10:06.579240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.005 [2024-12-16 20:10:06.579286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.005 [2024-12-16 20:10:06.579295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.005 [2024-12-16 20:10:06.579342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.006 [2024-12-16 20:10:06.579351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.006 
[2024-12-16 20:10:06.579405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.006 [2024-12-16 20:10:06.579419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.006 [2024-12-16 20:10:06.579427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.006 [2024-12-16 20:10:06.579459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.006 [2024-12-16 20:10:06.579638] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.027 ms, result 0 00:16:59.949 00:16:59.949 00:16:59.949 20:10:07 -- ftl/trim.sh@93 -- # svcpid=72427 00:16:59.949 20:10:07 -- ftl/trim.sh@94 -- # waitforlisten 72427 00:16:59.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.949 20:10:07 -- common/autotest_common.sh@829 -- # '[' -z 72427 ']' 00:16:59.949 20:10:07 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:59.949 20:10:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.949 20:10:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.949 20:10:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.949 20:10:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.949 20:10:07 -- common/autotest_common.sh@10 -- # set +x 00:16:59.949 [2024-12-16 20:10:07.588542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:00.210 [2024-12-16 20:10:07.588945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72427 ] 00:17:00.210 [2024-12-16 20:10:07.741700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.470 [2024-12-16 20:10:07.959231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:00.470 [2024-12-16 20:10:07.959756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.857 20:10:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.857 20:10:09 -- common/autotest_common.sh@862 -- # return 0 00:17:01.857 20:10:09 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:01.857 [2024-12-16 20:10:09.311553] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.857 [2024-12-16 20:10:09.311626] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.857 [2024-12-16 20:10:09.475649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.857 [2024-12-16 20:10:09.475794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.857 [2024-12-16 20:10:09.475814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.857 [2024-12-16 20:10:09.475822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.857 [2024-12-16 20:10:09.477867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.857 [2024-12-16 20:10:09.477898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.857 [2024-12-16 20:10:09.477907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.028 ms 00:17:01.857 [2024-12-16 20:10:09.477913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.857 [2024-12-16 20:10:09.477972] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.857 [2024-12-16 20:10:09.478572] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.857 [2024-12-16 20:10:09.478594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.857 [2024-12-16 20:10:09.478601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.858 [2024-12-16 20:10:09.478609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:17:01.858 [2024-12-16 20:10:09.478615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.858 [2024-12-16 20:10:09.479732] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.858 [2024-12-16 20:10:09.489443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.858 [2024-12-16 20:10:09.489562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.858 [2024-12-16 20:10:09.489576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.714 ms 00:17:01.858 [2024-12-16 20:10:09.489584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.858 [2024-12-16 20:10:09.489644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.858 [2024-12-16 20:10:09.489654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.858 [2024-12-16 20:10:09.489661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:01.858 [2024-12-16 20:10:09.489667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.858 [2024-12-16 20:10:09.494095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.858 [2024-12-16 20:10:09.494124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.858 [2024-12-16 20:10:09.494132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.390 ms 00:17:01.858 [2024-12-16 20:10:09.494139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.858 [2024-12-16 20:10:09.494207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.858 [2024-12-16 20:10:09.494215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.858 [2024-12-16 20:10:09.494221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:01.858 [2024-12-16 20:10:09.494228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.858 [2024-12-16 20:10:09.494247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.858 [2024-12-16 20:10:09.494255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.858 [2024-12-16 20:10:09.494260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:01.858 [2024-12-16 20:10:09.494268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.858 [2024-12-16 20:10:09.494289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.858 [2024-12-16 20:10:09.497047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.120 [2024-12-16 20:10:09.497147] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.120 [2024-12-16 20:10:09.497162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:17:02.120 [2024-12-16 20:10:09.497168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.120 [2024-12-16 20:10:09.497199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.120 [2024-12-16 20:10:09.497205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:02.120 [2024-12-16 20:10:09.497213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:02.120 [2024-12-16 20:10:09.497220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.120 [2024-12-16 20:10:09.497237] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:02.120 [2024-12-16 20:10:09.497250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:02.120 [2024-12-16 20:10:09.497276] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:02.120 [2024-12-16 20:10:09.497287] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:02.120 [2024-12-16 20:10:09.497359] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:02.120 [2024-12-16 20:10:09.497367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:02.120 [2024-12-16 20:10:09.497379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:02.120 [2024-12-16 20:10:09.497387] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:02.120 [2024-12-16 20:10:09.497395] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:02.120 [2024-12-16 20:10:09.497401] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:02.120 [2024-12-16 20:10:09.497407] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:02.120 [2024-12-16 20:10:09.497413] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:02.120 [2024-12-16 20:10:09.497422] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:02.120 [2024-12-16 20:10:09.497428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.120 [2024-12-16 20:10:09.497434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:02.120 [2024-12-16 20:10:09.497440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:17:02.120 [2024-12-16 20:10:09.497446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.120 [2024-12-16 20:10:09.497495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.120 [2024-12-16 20:10:09.497502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:02.120 [2024-12-16 20:10:09.497508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:02.120 [2024-12-16 20:10:09.497514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.120 [2024-12-16 20:10:09.497570] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:02.120 [2024-12-16 20:10:09.497579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:02.120 [2024-12-16 20:10:09.497584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:02.121 [2024-12-16 20:10:09.497603] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:02.121 [2024-12-16 20:10:09.497623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.121 [2024-12-16 20:10:09.497634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:02.121 [2024-12-16 20:10:09.497641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:02.121 [2024-12-16 20:10:09.497646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.121 [2024-12-16 20:10:09.497652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:02.121 [2024-12-16 20:10:09.497657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:02.121 [2024-12-16 20:10:09.497664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:02.121 [2024-12-16 20:10:09.497675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:02.121 [2024-12-16 20:10:09.497680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:02.121 [2024-12-16 20:10:09.497691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:02.121 [2024-12-16 20:10:09.497698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:02.121 [2024-12-16 20:10:09.497710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:02.121 [2024-12-16 20:10:09.497731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:02.121 [2024-12-16 20:10:09.497749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497761] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:02.121 [2024-12-16 20:10:09.497766] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:02.121 [2024-12-16 20:10:09.497783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.121 [2024-12-16 20:10:09.497794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:02.121 [2024-12-16 20:10:09.497799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:02.121 [2024-12-16 20:10:09.497806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.121 [2024-12-16 20:10:09.497811] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:02.121 [2024-12-16 20:10:09.497819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:02.121 [2024-12-16 20:10:09.497824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.121 [2024-12-16 20:10:09.497837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:02.121 [2024-12-16 20:10:09.497843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:02.121 [2024-12-16 20:10:09.497848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:02.121 [2024-12-16 20:10:09.497856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:02.121 [2024-12-16 20:10:09.497861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:02.121 [2024-12-16 20:10:09.497867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:02.121 [2024-12-16 20:10:09.497872] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:02.121 [2024-12-16 20:10:09.497881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.121 [2024-12-16 20:10:09.497887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:02.121 [2024-12-16 20:10:09.497893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:02.121 [2024-12-16 20:10:09.497898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:02.121 [2024-12-16 20:10:09.497907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:02.121 [2024-12-16 20:10:09.497913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:02.121 [2024-12-16 20:10:09.497919] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:02.121 [2024-12-16 20:10:09.497924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:02.121 [2024-12-16 
20:10:09.497930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:02.121 [2024-12-16 20:10:09.497935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:02.121 [2024-12-16 20:10:09.497942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:02.121 [2024-12-16 20:10:09.497948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:02.121 [2024-12-16 20:10:09.497954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:02.121 [2024-12-16 20:10:09.497960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:02.121 [2024-12-16 20:10:09.497966] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:02.121 [2024-12-16 20:10:09.497972] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.121 [2024-12-16 20:10:09.497979] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:02.121 [2024-12-16 20:10:09.497985] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:02.121 [2024-12-16 20:10:09.497991] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:02.121 [2024-12-16 20:10:09.497997] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:02.121 [2024-12-16 20:10:09.498005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.498011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:02.121 [2024-12-16 20:10:09.498018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:17:02.121 [2024-12-16 20:10:09.498023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.509970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.509997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.121 [2024-12-16 20:10:09.510008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.910 ms 00:17:02.121 [2024-12-16 20:10:09.510017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.510107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.510114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.121 [2024-12-16 20:10:09.510122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:02.121 [2024-12-16 20:10:09.510128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.534612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 
20:10:09.534636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.121 [2024-12-16 20:10:09.534645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.468 ms 00:17:02.121 [2024-12-16 20:10:09.534653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.534697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.534707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.121 [2024-12-16 20:10:09.534715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:02.121 [2024-12-16 20:10:09.534722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.535000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.535011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.121 [2024-12-16 20:10:09.535021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:02.121 [2024-12-16 20:10:09.535026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.535117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.535123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.121 [2024-12-16 20:10:09.535133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:02.121 [2024-12-16 20:10:09.535138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.121 [2024-12-16 20:10:09.547261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.121 [2024-12-16 20:10:09.547284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.122 [2024-12-16 20:10:09.547296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.106 ms 00:17:02.122 [2024-12-16 20:10:09.547315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.557363] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:02.122 [2024-12-16 20:10:09.557392] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:02.122 [2024-12-16 20:10:09.557403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.557410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:02.122 [2024-12-16 20:10:09.557419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.010 ms 00:17:02.122 [2024-12-16 20:10:09.557424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.576316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.576345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:02.122 [2024-12-16 20:10:09.576357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.798 ms 00:17:02.122 [2024-12-16 20:10:09.576364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.585409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.585439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:02.122 [2024-12-16 20:10:09.585448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.998 ms 00:17:02.122 [2024-12-16 20:10:09.585454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.594303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.594328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:02.122 [2024-12-16 20:10:09.594339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.800 ms 00:17:02.122 [2024-12-16 20:10:09.594344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.594630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.594643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.122 [2024-12-16 20:10:09.594653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:17:02.122 [2024-12-16 20:10:09.594659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.640777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.640817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:02.122 [2024-12-16 20:10:09.640831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.098 ms 00:17:02.122 [2024-12-16 20:10:09.640837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.649228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.122 [2024-12-16 20:10:09.661211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.661245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.122 [2024-12-16 20:10:09.661255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.289 ms 00:17:02.122 [2024-12-16 20:10:09.661263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.661331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.661343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.122 [2024-12-16 20:10:09.661350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.122 [2024-12-16 20:10:09.661359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.661399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.661407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.122 [2024-12-16 20:10:09.661413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:02.122 [2024-12-16 20:10:09.661420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.662359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.662384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:02.122 [2024-12-16 20:10:09.662391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:17:02.122 [2024-12-16 20:10:09.662398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
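As a cross-check on the layout dump printed earlier in this startup sequence: the superblock reports 23592960 L2P entries with a 4-byte address size, and the NV-cache layout gives the l2p region 0x5a00 blocks. Assuming the FTL's 4 KiB block size (an assumption, though consistent with these numbers), both figures work out to the same 90.00 MiB shown for the l2p region. A minimal sketch of the arithmetic:

# Hypothetical cross-check of the FTL layout figures from the dump above.
# The 4 KiB block size is an assumption; the other values are copied from the log.
FTL_BLOCK_SIZE = 4096                  # bytes, assumed

l2p_entries = 23592960                 # "L2P entries" from ftl_layout_setup
l2p_addr_size = 4                      # "L2P address size" from ftl_layout_setup
l2p_region_blocks = 0x5A00             # blk_sz of region type 0x2 (l2p) in the SB dump

table_mib = l2p_entries * l2p_addr_size / 2**20
region_mib = l2p_region_blocks * FTL_BLOCK_SIZE / 2**20
print(table_mib, region_mib)           # both print 90.0, matching "blocks: 90.00 MiB"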
00:17:02.122 [2024-12-16 20:10:09.662424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.662435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.122 [2024-12-16 20:10:09.662441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.122 [2024-12-16 20:10:09.662447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.662475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.122 [2024-12-16 20:10:09.662485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.662490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.122 [2024-12-16 20:10:09.662497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:02.122 [2024-12-16 20:10:09.662503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.680430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.680456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.122 [2024-12-16 20:10:09.680466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.907 ms 00:17:02.122 [2024-12-16 20:10:09.680472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.680541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.122 [2024-12-16 20:10:09.680549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.122 [2024-12-16 20:10:09.680557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:02.122 [2024-12-16 20:10:09.680565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.122 [2024-12-16 20:10:09.681172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.122 [2024-12-16 20:10:09.683610] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 205.309 ms, result 0 00:17:02.122 [2024-12-16 20:10:09.684678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.122 Some configs were skipped because the RPC state that can call them passed over. 
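The 'FTL startup' management process above finishes in 205.309 ms, and every step before it is reported through paired trace_step lines carrying a name, a duration and a status. A small, hypothetical helper (not part of the SPDK test scripts) that pulls those pairs out of a saved console log and ranks the slowest steps might look like the following sketch, assuming one log entry per line:

import re
import sys
from collections import defaultdict

# Hypothetical scraper for the mngt/ftl_mngt.c trace_step output seen above.
# It relies only on the "name:" / "duration:" line format visible in this log.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+?)\s*$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([0-9.]+) ms")

def step_durations(lines):
    """Pair each 'name:' trace line with the 'duration:' line that follows it."""
    pending = {}                      # device -> last step name seen
    totals = defaultdict(float)       # (device, step name) -> accumulated ms
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending[m.group(1)] = m.group(2)
            continue
        m = DUR_RE.search(line)
        if m and m.group(1) in pending:
            totals[(m.group(1), pending.pop(m.group(1)))] += float(m.group(2))
    return totals

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        totals = step_durations(f)
    for (dev, name), ms in sorted(totals.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{dev} {name:40s} {ms:8.3f} ms")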
00:17:02.122 20:10:09 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:02.383 [2024-12-16 20:10:09.915125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.383 [2024-12-16 20:10:09.915249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:02.383 [2024-12-16 20:10:09.915293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.409 ms 00:17:02.383 [2024-12-16 20:10:09.915322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.383 [2024-12-16 20:10:09.915364] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.646 ms, result 0 00:17:02.383 true 00:17:02.383 20:10:09 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:02.644 [2024-12-16 20:10:10.121875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.644 [2024-12-16 20:10:10.121992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:02.644 [2024-12-16 20:10:10.122036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.481 ms 00:17:02.644 [2024-12-16 20:10:10.122054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.644 [2024-12-16 20:10:10.122095] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.699 ms, result 0 00:17:02.644 true 00:17:02.644 20:10:10 -- ftl/trim.sh@102 -- # killprocess 72427 00:17:02.644 20:10:10 -- common/autotest_common.sh@936 -- # '[' -z 72427 ']' 00:17:02.644 20:10:10 -- common/autotest_common.sh@940 -- # kill -0 72427 00:17:02.644 20:10:10 -- common/autotest_common.sh@941 -- # uname 00:17:02.644 20:10:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.644 20:10:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72427 00:17:02.644 20:10:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:02.644 20:10:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:02.644 killing process with pid 72427 00:17:02.644 20:10:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72427' 00:17:02.644 20:10:10 -- common/autotest_common.sh@955 -- # kill 72427 00:17:02.644 20:10:10 -- common/autotest_common.sh@960 -- # wait 72427 00:17:03.217 [2024-12-16 20:10:10.695588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.695630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:03.217 [2024-12-16 20:10:10.695640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.217 [2024-12-16 20:10:10.695648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 20:10:10.695665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:03.217 [2024-12-16 20:10:10.697765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.697789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:03.217 [2024-12-16 20:10:10.697800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:17:03.217 [2024-12-16 20:10:10.697807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 
20:10:10.698015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.698022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:03.217 [2024-12-16 20:10:10.698030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:17:03.217 [2024-12-16 20:10:10.698036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 20:10:10.701193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.701219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:03.217 [2024-12-16 20:10:10.701228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:17:03.217 [2024-12-16 20:10:10.701233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 20:10:10.706555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.706679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:03.217 [2024-12-16 20:10:10.706695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.293 ms 00:17:03.217 [2024-12-16 20:10:10.706703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 20:10:10.714339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.714362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:03.217 [2024-12-16 20:10:10.714372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.593 ms 00:17:03.217 [2024-12-16 20:10:10.714377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 20:10:10.720387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.720413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:03.217 [2024-12-16 20:10:10.720422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:17:03.217 [2024-12-16 20:10:10.720428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.217 [2024-12-16 20:10:10.720522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.217 [2024-12-16 20:10:10.720529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:03.218 [2024-12-16 20:10:10.720537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:03.218 [2024-12-16 20:10:10.720542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.218 [2024-12-16 20:10:10.728214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.218 [2024-12-16 20:10:10.728237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:03.218 [2024-12-16 20:10:10.728246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.655 ms 00:17:03.218 [2024-12-16 20:10:10.728251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.218 [2024-12-16 20:10:10.736288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.218 [2024-12-16 20:10:10.736316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:03.218 [2024-12-16 20:10:10.736328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.008 ms 00:17:03.218 [2024-12-16 20:10:10.736333] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:03.218 [2024-12-16 20:10:10.743336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.218 [2024-12-16 20:10:10.743359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:03.218 [2024-12-16 20:10:10.743367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:17:03.218 [2024-12-16 20:10:10.743373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.218 [2024-12-16 20:10:10.750355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.218 [2024-12-16 20:10:10.750376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:03.218 [2024-12-16 20:10:10.750384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:17:03.218 [2024-12-16 20:10:10.750390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.218 [2024-12-16 20:10:10.750417] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:03.218 [2024-12-16 20:10:10.750428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750542] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750699] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 
20:10:10.750856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:03.218 [2024-12-16 20:10:10.750876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.750999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:03.219 [2024-12-16 20:10:10.751011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:03.219 [2024-12-16 20:10:10.751073] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:03.219 [2024-12-16 20:10:10.751082] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:17:03.219 [2024-12-16 20:10:10.751088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:03.219 [2024-12-16 20:10:10.751094] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:03.219 [2024-12-16 20:10:10.751099] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:03.219 [2024-12-16 20:10:10.751106] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:03.219 [2024-12-16 20:10:10.751111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:03.219 [2024-12-16 20:10:10.751118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:03.219 [2024-12-16 20:10:10.751124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:03.219 [2024-12-16 20:10:10.751130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:03.219 [2024-12-16 20:10:10.751135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:03.219 [2024-12-16 20:10:10.751141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.219 [2024-12-16 20:10:10.751147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:03.219 [2024-12-16 20:10:10.751154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:17:03.219 [2024-12-16 20:10:10.751160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.219 [2024-12-16 20:10:10.760849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.219 [2024-12-16 20:10:10.760872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:03.219 [2024-12-16 20:10:10.760884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.673 ms 00:17:03.219 [2024-12-16 20:10:10.760890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.219 [2024-12-16 20:10:10.761056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.219 [2024-12-16 20:10:10.761063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:03.219 
[2024-12-16 20:10:10.761073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:03.219 [2024-12-16 20:10:10.761078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.219 [2024-12-16 20:10:10.796253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.219 [2024-12-16 20:10:10.796278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.219 [2024-12-16 20:10:10.796287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.219 [2024-12-16 20:10:10.796293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.219 [2024-12-16 20:10:10.796371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.219 [2024-12-16 20:10:10.796378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.219 [2024-12-16 20:10:10.796388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.219 [2024-12-16 20:10:10.796393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.219 [2024-12-16 20:10:10.796425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.219 [2024-12-16 20:10:10.796432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.219 [2024-12-16 20:10:10.796441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.219 [2024-12-16 20:10:10.796447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.219 [2024-12-16 20:10:10.796461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.219 [2024-12-16 20:10:10.796467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.219 [2024-12-16 20:10:10.796474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.219 [2024-12-16 20:10:10.796481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.857061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.857191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.480 [2024-12-16 20:10:10.857208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.857215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.480 [2024-12-16 20:10:10.879660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.879666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.480 [2024-12-16 20:10:10.879723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.879729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879759] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.480 [2024-12-16 20:10:10.879766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.879772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.480 [2024-12-16 20:10:10.879861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.879866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:03.480 [2024-12-16 20:10:10.879904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.879910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.480 [2024-12-16 20:10:10.879955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.879960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.879993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.480 [2024-12-16 20:10:10.879999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.480 [2024-12-16 20:10:10.880007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.480 [2024-12-16 20:10:10.880013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.480 [2024-12-16 20:10:10.880116] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 184.511 ms, result 0 00:17:04.052 20:10:11 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:04.052 [2024-12-16 20:10:11.581976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
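For reference, the two bdev_ftl_unmap calls issued earlier in this run (trim.sh@99 at LBA 0 and trim.sh@100 at LBA 23591936, 1024 blocks each) target the first and the last 1024 logical blocks of the device: 23591936 + 1024 equals the 23592960 L2P entries reported in the layout. A short check of that arithmetic (variable names are illustrative, not from the test):

# Cross-check of the unmap ranges used by ftl/trim.sh in this log.
l2p_entries = 23592960                     # logical block count, from the layout dump
num_blocks = 1024                          # --num_blocks passed to bdev_ftl_unmap

first_range = (0, num_blocks)                          # trim.sh@99: head of the device
last_range = (l2p_entries - num_blocks, l2p_entries)   # trim.sh@100: tail of the device

assert last_range[0] == 23591936           # matches the --lba of the second RPC call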
00:17:04.052 [2024-12-16 20:10:11.582093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:17:04.313 [2024-12-16 20:10:11.730030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.313 [2024-12-16 20:10:11.869426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.574 [2024-12-16 20:10:12.072727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.574 [2024-12-16 20:10:12.072778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.574 [2024-12-16 20:10:12.213475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.574 [2024-12-16 20:10:12.213511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:04.574 [2024-12-16 20:10:12.213521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:04.574 [2024-12-16 20:10:12.213527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.835 [2024-12-16 20:10:12.215605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.835 [2024-12-16 20:10:12.215751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.835 [2024-12-16 20:10:12.215764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:17:04.835 [2024-12-16 20:10:12.215770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.835 [2024-12-16 20:10:12.215851] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:04.836 [2024-12-16 20:10:12.216411] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:04.836 [2024-12-16 20:10:12.216428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.216434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.836 [2024-12-16 20:10:12.216441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:17:04.836 [2024-12-16 20:10:12.216446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.217392] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:04.836 [2024-12-16 20:10:12.226880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.226994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:04.836 [2024-12-16 20:10:12.227009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.490 ms 00:17:04.836 [2024-12-16 20:10:12.227015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.227077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.227085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:04.836 [2024-12-16 20:10:12.227092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:04.836 [2024-12-16 20:10:12.227097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.231479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 
20:10:12.231501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.836 [2024-12-16 20:10:12.231509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:17:04.836 [2024-12-16 20:10:12.231517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.231598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.231606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.836 [2024-12-16 20:10:12.231613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:04.836 [2024-12-16 20:10:12.231618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.231634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.231639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:04.836 [2024-12-16 20:10:12.231645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:04.836 [2024-12-16 20:10:12.231650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.231674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:04.836 [2024-12-16 20:10:12.234399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.234421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.836 [2024-12-16 20:10:12.234428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:17:04.836 [2024-12-16 20:10:12.234435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.234465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.234471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:04.836 [2024-12-16 20:10:12.234478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:04.836 [2024-12-16 20:10:12.234483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.234497] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:04.836 [2024-12-16 20:10:12.234510] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:04.836 [2024-12-16 20:10:12.234534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:04.836 [2024-12-16 20:10:12.234547] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:04.836 [2024-12-16 20:10:12.234604] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:04.836 [2024-12-16 20:10:12.234611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:04.836 [2024-12-16 20:10:12.234619] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:04.836 [2024-12-16 20:10:12.234626] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234632] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234638] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:04.836 [2024-12-16 20:10:12.234643] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:04.836 [2024-12-16 20:10:12.234648] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:04.836 [2024-12-16 20:10:12.234656] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:04.836 [2024-12-16 20:10:12.234661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.234667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:04.836 [2024-12-16 20:10:12.234672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:04.836 [2024-12-16 20:10:12.234678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.234728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.836 [2024-12-16 20:10:12.234735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:04.836 [2024-12-16 20:10:12.234740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:04.836 [2024-12-16 20:10:12.234746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.836 [2024-12-16 20:10:12.234801] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:04.836 [2024-12-16 20:10:12.234808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:04.836 [2024-12-16 20:10:12.234814] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:04.836 [2024-12-16 20:10:12.234831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:04.836 [2024-12-16 20:10:12.234846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.836 [2024-12-16 20:10:12.234857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:04.836 [2024-12-16 20:10:12.234861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:04.836 [2024-12-16 20:10:12.234866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.836 [2024-12-16 20:10:12.234873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:04.836 [2024-12-16 20:10:12.234884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:04.836 [2024-12-16 20:10:12.234888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:04.836 [2024-12-16 20:10:12.234899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:04.836 [2024-12-16 20:10:12.234903] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:04.836 [2024-12-16 20:10:12.234913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:04.836 [2024-12-16 20:10:12.234918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:04.836 [2024-12-16 20:10:12.234928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:04.836 [2024-12-16 20:10:12.234941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:04.836 [2024-12-16 20:10:12.234956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:04.836 [2024-12-16 20:10:12.234970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:04.836 [2024-12-16 20:10:12.234979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:04.836 [2024-12-16 20:10:12.234984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:04.836 [2024-12-16 20:10:12.234989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.836 [2024-12-16 20:10:12.234993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:04.836 [2024-12-16 20:10:12.234998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:04.836 [2024-12-16 20:10:12.235003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.836 [2024-12-16 20:10:12.235007] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:04.836 [2024-12-16 20:10:12.235013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:04.836 [2024-12-16 20:10:12.235018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.836 [2024-12-16 20:10:12.235026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.836 [2024-12-16 20:10:12.235031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:04.836 [2024-12-16 20:10:12.235037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:04.836 [2024-12-16 20:10:12.235041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:04.836 [2024-12-16 20:10:12.235047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:04.836 [2024-12-16 20:10:12.235051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:04.836 [2024-12-16 20:10:12.235056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:04.837 [2024-12-16 20:10:12.235062] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:04.837 [2024-12-16 20:10:12.235069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.837 [2024-12-16 20:10:12.235075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:04.837 [2024-12-16 20:10:12.235081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:04.837 [2024-12-16 20:10:12.235086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:04.837 [2024-12-16 20:10:12.235091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:04.837 [2024-12-16 20:10:12.235097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:04.837 [2024-12-16 20:10:12.235102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:04.837 [2024-12-16 20:10:12.235107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:04.837 [2024-12-16 20:10:12.235112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:04.837 [2024-12-16 20:10:12.235117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:04.837 [2024-12-16 20:10:12.235122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:04.837 [2024-12-16 20:10:12.235128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:04.837 [2024-12-16 20:10:12.235133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:04.837 [2024-12-16 20:10:12.235139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:04.837 [2024-12-16 20:10:12.235144] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:04.837 [2024-12-16 20:10:12.235153] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.837 [2024-12-16 20:10:12.235159] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:04.837 [2024-12-16 20:10:12.235164] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:04.837 [2024-12-16 20:10:12.235169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:04.837 [2024-12-16 20:10:12.235175] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:04.837 [2024-12-16 20:10:12.235180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.235185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:04.837 [2024-12-16 20:10:12.235191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:17:04.837 [2024-12-16 20:10:12.235196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.247006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.247034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.837 [2024-12-16 20:10:12.247042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.779 ms 00:17:04.837 [2024-12-16 20:10:12.247048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.247135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.247142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.837 [2024-12-16 20:10:12.247147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:04.837 [2024-12-16 20:10:12.247152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.283827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.283940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.837 [2024-12-16 20:10:12.283954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.659 ms 00:17:04.837 [2024-12-16 20:10:12.283961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.284016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.284025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.837 [2024-12-16 20:10:12.284035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.837 [2024-12-16 20:10:12.284041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.284341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.284353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.837 [2024-12-16 20:10:12.284360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:04.837 [2024-12-16 20:10:12.284366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.284459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.284466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.837 [2024-12-16 20:10:12.284472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:04.837 [2024-12-16 20:10:12.284478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.295707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.295731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.837 [2024-12-16 20:10:12.295738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.213 ms 00:17:04.837 
[2024-12-16 20:10:12.295745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.305617] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:04.837 [2024-12-16 20:10:12.305643] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:04.837 [2024-12-16 20:10:12.305651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.305657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:04.837 [2024-12-16 20:10:12.305664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.832 ms 00:17:04.837 [2024-12-16 20:10:12.305669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.324064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.324094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:04.837 [2024-12-16 20:10:12.324102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.351 ms 00:17:04.837 [2024-12-16 20:10:12.324108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.332979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.333003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:04.837 [2024-12-16 20:10:12.333016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.820 ms 00:17:04.837 [2024-12-16 20:10:12.333021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.341639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.341664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:04.837 [2024-12-16 20:10:12.341671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.580 ms 00:17:04.837 [2024-12-16 20:10:12.341676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.341943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.341951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.837 [2024-12-16 20:10:12.341957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:04.837 [2024-12-16 20:10:12.341964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.387115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.387244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:04.837 [2024-12-16 20:10:12.387259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.134 ms 00:17:04.837 [2024-12-16 20:10:12.387269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.395274] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:04.837 [2024-12-16 20:10:12.406912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.406940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:04.837 [2024-12-16 20:10:12.406950] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.575 ms 00:17:04.837 [2024-12-16 20:10:12.406956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.407008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.407015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:04.837 [2024-12-16 20:10:12.407024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:04.837 [2024-12-16 20:10:12.407029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.407066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.407072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:04.837 [2024-12-16 20:10:12.407079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:04.837 [2024-12-16 20:10:12.407084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.408029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.408056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:04.837 [2024-12-16 20:10:12.408063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:17:04.837 [2024-12-16 20:10:12.408069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.408095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.408104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:04.837 [2024-12-16 20:10:12.408109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:04.837 [2024-12-16 20:10:12.408115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.408140] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:04.837 [2024-12-16 20:10:12.408147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.837 [2024-12-16 20:10:12.408152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:04.837 [2024-12-16 20:10:12.408158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:04.837 [2024-12-16 20:10:12.408163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.837 [2024-12-16 20:10:12.426342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.838 [2024-12-16 20:10:12.426366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:04.838 [2024-12-16 20:10:12.426374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.163 ms 00:17:04.838 [2024-12-16 20:10:12.426380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.838 [2024-12-16 20:10:12.426445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.838 [2024-12-16 20:10:12.426453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:04.838 [2024-12-16 20:10:12.426459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:04.838 [2024-12-16 20:10:12.426465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.838 [2024-12-16 20:10:12.427086] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.838 [2024-12-16 20:10:12.429513] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.378 ms, result 0 00:17:04.838 [2024-12-16 20:10:12.430232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:04.838 [2024-12-16 20:10:12.445330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.222  [2024-12-16T20:10:14.807Z] Copying: 18/256 [MB] (18 MBps) [2024-12-16T20:10:15.754Z] Copying: 34/256 [MB] (15 MBps) [2024-12-16T20:10:16.698Z] Copying: 47/256 [MB] (12 MBps) [2024-12-16T20:10:17.640Z] Copying: 57/256 [MB] (10 MBps) [2024-12-16T20:10:18.582Z] Copying: 73/256 [MB] (16 MBps) [2024-12-16T20:10:19.525Z] Copying: 92/256 [MB] (18 MBps) [2024-12-16T20:10:20.912Z] Copying: 102/256 [MB] (10 MBps) [2024-12-16T20:10:21.856Z] Copying: 113/256 [MB] (10 MBps) [2024-12-16T20:10:22.801Z] Copying: 124/256 [MB] (10 MBps) [2024-12-16T20:10:23.745Z] Copying: 134/256 [MB] (10 MBps) [2024-12-16T20:10:24.689Z] Copying: 148/256 [MB] (13 MBps) [2024-12-16T20:10:25.636Z] Copying: 165/256 [MB] (16 MBps) [2024-12-16T20:10:26.580Z] Copying: 186/256 [MB] (21 MBps) [2024-12-16T20:10:27.524Z] Copying: 204/256 [MB] (18 MBps) [2024-12-16T20:10:28.910Z] Copying: 221/256 [MB] (16 MBps) [2024-12-16T20:10:29.483Z] Copying: 239/256 [MB] (17 MBps) [2024-12-16T20:10:30.056Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-16 20:10:29.798779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.416 [2024-12-16 20:10:29.813036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.813104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.416 [2024-12-16 20:10:29.813120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:22.416 [2024-12-16 20:10:29.813129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.813161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.416 [2024-12-16 20:10:29.816156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.816200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.416 [2024-12-16 20:10:29.816212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:17:22.416 [2024-12-16 20:10:29.816222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.816549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.816562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.416 [2024-12-16 20:10:29.816572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:22.416 [2024-12-16 20:10:29.816584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.820279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.820322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.416 [2024-12-16 20:10:29.820332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 
ms 00:17:22.416 [2024-12-16 20:10:29.820342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.827235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.827484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:22.416 [2024-12-16 20:10:29.827533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.855 ms 00:17:22.416 [2024-12-16 20:10:29.827543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.854009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.854061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.416 [2024-12-16 20:10:29.854075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.373 ms 00:17:22.416 [2024-12-16 20:10:29.854082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.870832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.870883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.416 [2024-12-16 20:10:29.870896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.675 ms 00:17:22.416 [2024-12-16 20:10:29.870905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.871092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.871104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.416 [2024-12-16 20:10:29.871114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:22.416 [2024-12-16 20:10:29.871122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.898158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.898389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:22.416 [2024-12-16 20:10:29.898413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.017 ms 00:17:22.416 [2024-12-16 20:10:29.898421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.924610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.924661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:22.416 [2024-12-16 20:10:29.924673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.108 ms 00:17:22.416 [2024-12-16 20:10:29.924679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.950744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.950794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.416 [2024-12-16 20:10:29.950806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.981 ms 00:17:22.416 [2024-12-16 20:10:29.950813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.976713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.416 [2024-12-16 20:10:29.976762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.416 [2024-12-16 20:10:29.976775] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.781 ms 00:17:22.416 [2024-12-16 20:10:29.976782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.416 [2024-12-16 20:10:29.976863] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.416 [2024-12-16 20:10:29.976881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.416 [2024-12-16 20:10:29.976973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.976981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.976988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.976995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977504] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 
20:10:29.977726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.417 [2024-12-16 20:10:29.977758] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.417 [2024-12-16 20:10:29.977766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9ee28d-6188-48e1-a96b-343e6273784c 00:17:22.417 [2024-12-16 20:10:29.977775] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.418 [2024-12-16 20:10:29.977783] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.418 [2024-12-16 20:10:29.977792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.418 [2024-12-16 20:10:29.977799] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.418 [2024-12-16 20:10:29.977807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.418 [2024-12-16 20:10:29.977818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.418 [2024-12-16 20:10:29.977825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.418 [2024-12-16 20:10:29.977832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.418 [2024-12-16 20:10:29.977838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.418 [2024-12-16 20:10:29.977847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.418 [2024-12-16 20:10:29.977855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.418 [2024-12-16 20:10:29.977865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:17:22.418 [2024-12-16 20:10:29.977873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.418 [2024-12-16 20:10:29.991591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.418 [2024-12-16 20:10:29.991635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.418 [2024-12-16 20:10:29.991655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.694 ms 00:17:22.418 [2024-12-16 20:10:29.991662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.418 [2024-12-16 20:10:29.991907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.418 [2024-12-16 20:10:29.991917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.418 [2024-12-16 20:10:29.991926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:17:22.418 [2024-12-16 20:10:29.991933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.418 [2024-12-16 20:10:30.033831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.418 [2024-12-16 20:10:30.033883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.418 [2024-12-16 20:10:30.033902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.418 [2024-12-16 20:10:30.033910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.418 [2024-12-16 20:10:30.034006] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:22.418 [2024-12-16 20:10:30.034015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.418 [2024-12-16 20:10:30.034023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.418 [2024-12-16 20:10:30.034032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.418 [2024-12-16 20:10:30.034083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.418 [2024-12-16 20:10:30.034094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.418 [2024-12-16 20:10:30.034102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.418 [2024-12-16 20:10:30.034114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.418 [2024-12-16 20:10:30.034134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.418 [2024-12-16 20:10:30.034142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.418 [2024-12-16 20:10:30.034150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.418 [2024-12-16 20:10:30.034158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.679 [2024-12-16 20:10:30.115137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.679 [2024-12-16 20:10:30.115197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.679 [2024-12-16 20:10:30.115217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.679 [2024-12-16 20:10:30.115225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.679 [2024-12-16 20:10:30.147325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.679 [2024-12-16 20:10:30.147373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.679 [2024-12-16 20:10:30.147386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.679 [2024-12-16 20:10:30.147395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.679 [2024-12-16 20:10:30.147457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.679 [2024-12-16 20:10:30.147467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.679 [2024-12-16 20:10:30.147476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.679 [2024-12-16 20:10:30.147485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.679 [2024-12-16 20:10:30.147542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.679 [2024-12-16 20:10:30.147552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.679 [2024-12-16 20:10:30.147560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.679 [2024-12-16 20:10:30.147569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.680 [2024-12-16 20:10:30.147673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.680 [2024-12-16 20:10:30.147684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.680 [2024-12-16 20:10:30.147693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.680 [2024-12-16 20:10:30.147700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:22.680 [2024-12-16 20:10:30.147737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.680 [2024-12-16 20:10:30.147747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.680 [2024-12-16 20:10:30.147756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.680 [2024-12-16 20:10:30.147764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.680 [2024-12-16 20:10:30.147809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.680 [2024-12-16 20:10:30.147818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.680 [2024-12-16 20:10:30.147827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.680 [2024-12-16 20:10:30.147836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.680 [2024-12-16 20:10:30.147888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.680 [2024-12-16 20:10:30.147902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.680 [2024-12-16 20:10:30.147910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.680 [2024-12-16 20:10:30.147917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.680 [2024-12-16 20:10:30.148077] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.057 ms, result 0 00:17:23.622 00:17:23.622 00:17:23.622 20:10:31 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:24.194 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:24.194 20:10:31 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:24.194 20:10:31 -- ftl/trim.sh@109 -- # fio_kill 00:17:24.194 20:10:31 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:24.194 20:10:31 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.194 20:10:31 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:24.194 20:10:31 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:24.194 Process with pid 72427 is not found 00:17:24.194 20:10:31 -- ftl/trim.sh@20 -- # killprocess 72427 00:17:24.194 20:10:31 -- common/autotest_common.sh@936 -- # '[' -z 72427 ']' 00:17:24.194 20:10:31 -- common/autotest_common.sh@940 -- # kill -0 72427 00:17:24.194 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72427) - No such process 00:17:24.194 20:10:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72427 is not found' 00:17:24.194 ************************************ 00:17:24.194 END TEST ftl_trim 00:17:24.194 ************************************ 00:17:24.194 00:17:24.194 real 1m21.732s 00:17:24.194 user 1m37.295s 00:17:24.194 sys 0m15.162s 00:17:24.194 20:10:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:24.194 20:10:31 -- common/autotest_common.sh@10 -- # set +x 00:17:24.194 20:10:31 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:24.194 20:10:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:24.194 20:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.194 20:10:31 -- common/autotest_common.sh@10 -- # set +x 00:17:24.194 ************************************ 00:17:24.194 START TEST ftl_restore 
00:17:24.194 ************************************ 00:17:24.194 20:10:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:24.194 * Looking for test storage... 00:17:24.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.194 20:10:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:24.194 20:10:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:24.194 20:10:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:24.454 20:10:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:24.454 20:10:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:24.454 20:10:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:24.454 20:10:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:24.454 20:10:31 -- scripts/common.sh@335 -- # IFS=.-: 00:17:24.454 20:10:31 -- scripts/common.sh@335 -- # read -ra ver1 00:17:24.454 20:10:31 -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.454 20:10:31 -- scripts/common.sh@336 -- # read -ra ver2 00:17:24.454 20:10:31 -- scripts/common.sh@337 -- # local 'op=<' 00:17:24.454 20:10:31 -- scripts/common.sh@339 -- # ver1_l=2 00:17:24.454 20:10:31 -- scripts/common.sh@340 -- # ver2_l=1 00:17:24.454 20:10:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:24.454 20:10:31 -- scripts/common.sh@343 -- # case "$op" in 00:17:24.454 20:10:31 -- scripts/common.sh@344 -- # : 1 00:17:24.454 20:10:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:24.454 20:10:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.454 20:10:31 -- scripts/common.sh@364 -- # decimal 1 00:17:24.454 20:10:31 -- scripts/common.sh@352 -- # local d=1 00:17:24.455 20:10:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.455 20:10:31 -- scripts/common.sh@354 -- # echo 1 00:17:24.455 20:10:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:24.455 20:10:31 -- scripts/common.sh@365 -- # decimal 2 00:17:24.455 20:10:31 -- scripts/common.sh@352 -- # local d=2 00:17:24.455 20:10:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.455 20:10:31 -- scripts/common.sh@354 -- # echo 2 00:17:24.455 20:10:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:24.455 20:10:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:24.455 20:10:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:24.455 20:10:31 -- scripts/common.sh@367 -- # return 0 00:17:24.455 20:10:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.455 20:10:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:24.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.455 --rc genhtml_branch_coverage=1 00:17:24.455 --rc genhtml_function_coverage=1 00:17:24.455 --rc genhtml_legend=1 00:17:24.455 --rc geninfo_all_blocks=1 00:17:24.455 --rc geninfo_unexecuted_blocks=1 00:17:24.455 00:17:24.455 ' 00:17:24.455 20:10:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:24.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.455 --rc genhtml_branch_coverage=1 00:17:24.455 --rc genhtml_function_coverage=1 00:17:24.455 --rc genhtml_legend=1 00:17:24.455 --rc geninfo_all_blocks=1 00:17:24.455 --rc geninfo_unexecuted_blocks=1 00:17:24.455 00:17:24.455 ' 00:17:24.455 20:10:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:24.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.455 --rc 
genhtml_branch_coverage=1 00:17:24.455 --rc genhtml_function_coverage=1 00:17:24.455 --rc genhtml_legend=1 00:17:24.455 --rc geninfo_all_blocks=1 00:17:24.455 --rc geninfo_unexecuted_blocks=1 00:17:24.455 00:17:24.455 ' 00:17:24.455 20:10:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:24.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.455 --rc genhtml_branch_coverage=1 00:17:24.455 --rc genhtml_function_coverage=1 00:17:24.455 --rc genhtml_legend=1 00:17:24.455 --rc geninfo_all_blocks=1 00:17:24.455 --rc geninfo_unexecuted_blocks=1 00:17:24.455 00:17:24.455 ' 00:17:24.455 20:10:31 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:24.455 20:10:31 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:24.455 20:10:31 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.455 20:10:31 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.455 20:10:31 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:24.455 20:10:31 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:24.455 20:10:31 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.455 20:10:31 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:24.455 20:10:31 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:24.455 20:10:31 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.455 20:10:31 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.455 20:10:31 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:24.455 20:10:31 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:24.455 20:10:31 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.455 20:10:31 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.455 20:10:31 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:24.455 20:10:31 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:24.455 20:10:31 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.455 20:10:31 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.455 20:10:31 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:24.455 20:10:31 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:24.455 20:10:31 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.455 20:10:31 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.455 20:10:31 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.455 20:10:31 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.455 20:10:31 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:24.455 20:10:31 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:24.455 20:10:31 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.455 20:10:31 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.455 20:10:31 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.455 20:10:31 -- ftl/restore.sh@13 -- # mktemp -d 00:17:24.455 20:10:31 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.WKIb9STJDH 00:17:24.455 20:10:31 -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:24.455 20:10:31 -- ftl/restore.sh@16 -- # case $opt in 00:17:24.455 20:10:31 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:24.455 20:10:31 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:24.455 20:10:31 -- ftl/restore.sh@23 -- # shift 2 00:17:24.455 20:10:31 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:24.455 20:10:31 -- ftl/restore.sh@25 -- # timeout=240 00:17:24.455 20:10:31 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:24.455 20:10:31 -- ftl/restore.sh@39 -- # svcpid=72767 00:17:24.455 20:10:31 -- ftl/restore.sh@41 -- # waitforlisten 72767 00:17:24.455 20:10:31 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.455 20:10:31 -- common/autotest_common.sh@829 -- # '[' -z 72767 ']' 00:17:24.455 20:10:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.455 20:10:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.455 20:10:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.455 20:10:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.455 20:10:31 -- common/autotest_common.sh@10 -- # set +x 00:17:24.455 [2024-12-16 20:10:32.008444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:24.455 [2024-12-16 20:10:32.008843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72767 ] 00:17:24.716 [2024-12-16 20:10:32.164800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.015 [2024-12-16 20:10:32.387009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.015 [2024-12-16 20:10:32.387553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.972 20:10:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.972 20:10:33 -- common/autotest_common.sh@862 -- # return 0 00:17:25.972 20:10:33 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:25.972 20:10:33 -- ftl/common.sh@54 -- # local name=nvme0 00:17:25.972 20:10:33 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:25.972 20:10:33 -- ftl/common.sh@56 -- # local size=103424 00:17:25.972 20:10:33 -- ftl/common.sh@59 -- # local base_bdev 00:17:25.972 20:10:33 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:26.233 20:10:33 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:26.233 20:10:33 -- ftl/common.sh@62 -- # local base_size 00:17:26.233 20:10:33 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:26.233 20:10:33 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:26.233 20:10:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:26.233 20:10:33 -- common/autotest_common.sh@1369 -- # local bs 00:17:26.233 20:10:33 -- common/autotest_common.sh@1370 -- # local nb 00:17:26.233 20:10:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:26.495 20:10:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:26.495 { 00:17:26.495 "name": 
"nvme0n1", 00:17:26.495 "aliases": [ 00:17:26.495 "d49c8afd-f8da-4dc3-a801-181ad8e6f6d5" 00:17:26.495 ], 00:17:26.495 "product_name": "NVMe disk", 00:17:26.495 "block_size": 4096, 00:17:26.495 "num_blocks": 1310720, 00:17:26.495 "uuid": "d49c8afd-f8da-4dc3-a801-181ad8e6f6d5", 00:17:26.495 "assigned_rate_limits": { 00:17:26.495 "rw_ios_per_sec": 0, 00:17:26.495 "rw_mbytes_per_sec": 0, 00:17:26.495 "r_mbytes_per_sec": 0, 00:17:26.495 "w_mbytes_per_sec": 0 00:17:26.495 }, 00:17:26.495 "claimed": true, 00:17:26.495 "claim_type": "read_many_write_one", 00:17:26.495 "zoned": false, 00:17:26.495 "supported_io_types": { 00:17:26.495 "read": true, 00:17:26.495 "write": true, 00:17:26.495 "unmap": true, 00:17:26.495 "write_zeroes": true, 00:17:26.495 "flush": true, 00:17:26.495 "reset": true, 00:17:26.495 "compare": true, 00:17:26.495 "compare_and_write": false, 00:17:26.495 "abort": true, 00:17:26.495 "nvme_admin": true, 00:17:26.495 "nvme_io": true 00:17:26.495 }, 00:17:26.495 "driver_specific": { 00:17:26.495 "nvme": [ 00:17:26.495 { 00:17:26.495 "pci_address": "0000:00:07.0", 00:17:26.495 "trid": { 00:17:26.495 "trtype": "PCIe", 00:17:26.495 "traddr": "0000:00:07.0" 00:17:26.495 }, 00:17:26.495 "ctrlr_data": { 00:17:26.495 "cntlid": 0, 00:17:26.495 "vendor_id": "0x1b36", 00:17:26.495 "model_number": "QEMU NVMe Ctrl", 00:17:26.495 "serial_number": "12341", 00:17:26.495 "firmware_revision": "8.0.0", 00:17:26.495 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:26.495 "oacs": { 00:17:26.495 "security": 0, 00:17:26.495 "format": 1, 00:17:26.495 "firmware": 0, 00:17:26.495 "ns_manage": 1 00:17:26.495 }, 00:17:26.495 "multi_ctrlr": false, 00:17:26.495 "ana_reporting": false 00:17:26.495 }, 00:17:26.495 "vs": { 00:17:26.495 "nvme_version": "1.4" 00:17:26.495 }, 00:17:26.495 "ns_data": { 00:17:26.495 "id": 1, 00:17:26.495 "can_share": false 00:17:26.495 } 00:17:26.495 } 00:17:26.495 ], 00:17:26.495 "mp_policy": "active_passive" 00:17:26.495 } 00:17:26.495 } 00:17:26.495 ]' 00:17:26.495 20:10:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:26.495 20:10:34 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:26.495 20:10:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:26.495 20:10:34 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:26.495 20:10:34 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:26.495 20:10:34 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:26.495 20:10:34 -- ftl/common.sh@63 -- # base_size=5120 00:17:26.495 20:10:34 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:26.495 20:10:34 -- ftl/common.sh@67 -- # clear_lvols 00:17:26.495 20:10:34 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:26.495 20:10:34 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:26.757 20:10:34 -- ftl/common.sh@28 -- # stores=8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188 00:17:26.757 20:10:34 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:26.757 20:10:34 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a6a01d7-8c2a-4e5a-a30f-3b2e2f46b188 00:17:27.018 20:10:34 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:27.279 20:10:34 -- ftl/common.sh@68 -- # lvs=5bb754d4-b3c8-42c4-ad97-fa9d29a7745a 00:17:27.279 20:10:34 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5bb754d4-b3c8-42c4-ad97-fa9d29a7745a 00:17:27.279 20:10:34 -- ftl/restore.sh@43 
-- # split_bdev=29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.279 20:10:34 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:27.279 20:10:34 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.279 20:10:34 -- ftl/common.sh@35 -- # local name=nvc0 00:17:27.279 20:10:34 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:27.279 20:10:34 -- ftl/common.sh@37 -- # local base_bdev=29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.279 20:10:34 -- ftl/common.sh@38 -- # local cache_size= 00:17:27.279 20:10:34 -- ftl/common.sh@41 -- # get_bdev_size 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.279 20:10:34 -- common/autotest_common.sh@1367 -- # local bdev_name=29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.279 20:10:34 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:27.279 20:10:34 -- common/autotest_common.sh@1369 -- # local bs 00:17:27.279 20:10:34 -- common/autotest_common.sh@1370 -- # local nb 00:17:27.542 20:10:34 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.542 20:10:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:27.542 { 00:17:27.542 "name": "29f66bab-d7d6-416a-a4c4-1ccc8209ef61", 00:17:27.542 "aliases": [ 00:17:27.542 "lvs/nvme0n1p0" 00:17:27.542 ], 00:17:27.542 "product_name": "Logical Volume", 00:17:27.542 "block_size": 4096, 00:17:27.542 "num_blocks": 26476544, 00:17:27.542 "uuid": "29f66bab-d7d6-416a-a4c4-1ccc8209ef61", 00:17:27.542 "assigned_rate_limits": { 00:17:27.542 "rw_ios_per_sec": 0, 00:17:27.542 "rw_mbytes_per_sec": 0, 00:17:27.542 "r_mbytes_per_sec": 0, 00:17:27.542 "w_mbytes_per_sec": 0 00:17:27.542 }, 00:17:27.542 "claimed": false, 00:17:27.542 "zoned": false, 00:17:27.542 "supported_io_types": { 00:17:27.542 "read": true, 00:17:27.542 "write": true, 00:17:27.542 "unmap": true, 00:17:27.542 "write_zeroes": true, 00:17:27.542 "flush": false, 00:17:27.542 "reset": true, 00:17:27.542 "compare": false, 00:17:27.542 "compare_and_write": false, 00:17:27.542 "abort": false, 00:17:27.542 "nvme_admin": false, 00:17:27.542 "nvme_io": false 00:17:27.542 }, 00:17:27.542 "driver_specific": { 00:17:27.542 "lvol": { 00:17:27.542 "lvol_store_uuid": "5bb754d4-b3c8-42c4-ad97-fa9d29a7745a", 00:17:27.542 "base_bdev": "nvme0n1", 00:17:27.542 "thin_provision": true, 00:17:27.542 "snapshot": false, 00:17:27.542 "clone": false, 00:17:27.542 "esnap_clone": false 00:17:27.542 } 00:17:27.542 } 00:17:27.542 } 00:17:27.542 ]' 00:17:27.542 20:10:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:27.542 20:10:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:27.542 20:10:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:27.803 20:10:35 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:27.803 20:10:35 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:27.803 20:10:35 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:27.803 20:10:35 -- ftl/common.sh@41 -- # local base_size=5171 00:17:27.803 20:10:35 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:27.803 20:10:35 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:27.803 20:10:35 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:27.803 20:10:35 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:27.803 20:10:35 -- ftl/common.sh@48 -- # get_bdev_size 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.803 20:10:35 -- 
common/autotest_common.sh@1367 -- # local bdev_name=29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:27.803 20:10:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:27.803 20:10:35 -- common/autotest_common.sh@1369 -- # local bs 00:17:27.803 20:10:35 -- common/autotest_common.sh@1370 -- # local nb 00:17:27.803 20:10:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:28.064 20:10:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:28.064 { 00:17:28.064 "name": "29f66bab-d7d6-416a-a4c4-1ccc8209ef61", 00:17:28.064 "aliases": [ 00:17:28.064 "lvs/nvme0n1p0" 00:17:28.064 ], 00:17:28.064 "product_name": "Logical Volume", 00:17:28.064 "block_size": 4096, 00:17:28.064 "num_blocks": 26476544, 00:17:28.064 "uuid": "29f66bab-d7d6-416a-a4c4-1ccc8209ef61", 00:17:28.064 "assigned_rate_limits": { 00:17:28.064 "rw_ios_per_sec": 0, 00:17:28.064 "rw_mbytes_per_sec": 0, 00:17:28.064 "r_mbytes_per_sec": 0, 00:17:28.064 "w_mbytes_per_sec": 0 00:17:28.064 }, 00:17:28.064 "claimed": false, 00:17:28.064 "zoned": false, 00:17:28.064 "supported_io_types": { 00:17:28.064 "read": true, 00:17:28.064 "write": true, 00:17:28.064 "unmap": true, 00:17:28.064 "write_zeroes": true, 00:17:28.064 "flush": false, 00:17:28.064 "reset": true, 00:17:28.064 "compare": false, 00:17:28.064 "compare_and_write": false, 00:17:28.064 "abort": false, 00:17:28.064 "nvme_admin": false, 00:17:28.064 "nvme_io": false 00:17:28.064 }, 00:17:28.064 "driver_specific": { 00:17:28.064 "lvol": { 00:17:28.064 "lvol_store_uuid": "5bb754d4-b3c8-42c4-ad97-fa9d29a7745a", 00:17:28.064 "base_bdev": "nvme0n1", 00:17:28.064 "thin_provision": true, 00:17:28.064 "snapshot": false, 00:17:28.064 "clone": false, 00:17:28.064 "esnap_clone": false 00:17:28.064 } 00:17:28.064 } 00:17:28.064 } 00:17:28.064 ]' 00:17:28.064 20:10:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:28.064 20:10:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:28.064 20:10:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:28.064 20:10:35 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:28.064 20:10:35 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:28.064 20:10:35 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:28.064 20:10:35 -- ftl/common.sh@48 -- # cache_size=5171 00:17:28.064 20:10:35 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:28.326 20:10:35 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:28.326 20:10:35 -- ftl/restore.sh@48 -- # get_bdev_size 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:28.326 20:10:35 -- common/autotest_common.sh@1367 -- # local bdev_name=29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:28.326 20:10:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:28.326 20:10:35 -- common/autotest_common.sh@1369 -- # local bs 00:17:28.326 20:10:35 -- common/autotest_common.sh@1370 -- # local nb 00:17:28.326 20:10:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 00:17:28.586 20:10:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:28.586 { 00:17:28.586 "name": "29f66bab-d7d6-416a-a4c4-1ccc8209ef61", 00:17:28.586 "aliases": [ 00:17:28.586 "lvs/nvme0n1p0" 00:17:28.586 ], 00:17:28.586 "product_name": "Logical Volume", 00:17:28.586 "block_size": 4096, 00:17:28.586 "num_blocks": 26476544, 00:17:28.586 "uuid": 
"29f66bab-d7d6-416a-a4c4-1ccc8209ef61", 00:17:28.586 "assigned_rate_limits": { 00:17:28.586 "rw_ios_per_sec": 0, 00:17:28.586 "rw_mbytes_per_sec": 0, 00:17:28.586 "r_mbytes_per_sec": 0, 00:17:28.586 "w_mbytes_per_sec": 0 00:17:28.586 }, 00:17:28.586 "claimed": false, 00:17:28.586 "zoned": false, 00:17:28.586 "supported_io_types": { 00:17:28.586 "read": true, 00:17:28.586 "write": true, 00:17:28.586 "unmap": true, 00:17:28.586 "write_zeroes": true, 00:17:28.586 "flush": false, 00:17:28.586 "reset": true, 00:17:28.586 "compare": false, 00:17:28.586 "compare_and_write": false, 00:17:28.586 "abort": false, 00:17:28.586 "nvme_admin": false, 00:17:28.586 "nvme_io": false 00:17:28.586 }, 00:17:28.586 "driver_specific": { 00:17:28.586 "lvol": { 00:17:28.586 "lvol_store_uuid": "5bb754d4-b3c8-42c4-ad97-fa9d29a7745a", 00:17:28.586 "base_bdev": "nvme0n1", 00:17:28.586 "thin_provision": true, 00:17:28.586 "snapshot": false, 00:17:28.586 "clone": false, 00:17:28.586 "esnap_clone": false 00:17:28.586 } 00:17:28.586 } 00:17:28.586 } 00:17:28.586 ]' 00:17:28.586 20:10:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:28.586 20:10:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:28.586 20:10:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:28.586 20:10:36 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:28.586 20:10:36 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:28.586 20:10:36 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:28.586 20:10:36 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:28.586 20:10:36 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 --l2p_dram_limit 10' 00:17:28.586 20:10:36 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:28.586 20:10:36 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:28.586 20:10:36 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:28.586 20:10:36 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:28.586 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:28.586 20:10:36 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 --l2p_dram_limit 10 -c nvc0n1p0 00:17:28.848 [2024-12-16 20:10:36.337668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.337823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.848 [2024-12-16 20:10:36.337881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.848 [2024-12-16 20:10:36.337903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.337977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.337997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.848 [2024-12-16 20:10:36.338015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:28.848 [2024-12-16 20:10:36.338031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.338061] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.848 [2024-12-16 20:10:36.338765] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.848 [2024-12-16 20:10:36.338877] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.338929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.848 [2024-12-16 20:10:36.338951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:17:28.848 [2024-12-16 20:10:36.338972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.339105] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a332268c-feec-4876-952a-9f7e0b8fbbf4 00:17:28.848 [2024-12-16 20:10:36.340401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.340506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:28.848 [2024-12-16 20:10:36.340561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:28.848 [2024-12-16 20:10:36.340582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.346694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.346804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.848 [2024-12-16 20:10:36.346858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.063 ms 00:17:28.848 [2024-12-16 20:10:36.346879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.346967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.347289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.848 [2024-12-16 20:10:36.347392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:28.848 [2024-12-16 20:10:36.347421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.347536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.347572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.848 [2024-12-16 20:10:36.347681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:28.848 [2024-12-16 20:10:36.347702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.347744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.848 [2024-12-16 20:10:36.351066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.351170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.848 [2024-12-16 20:10:36.351221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.330 ms 00:17:28.848 [2024-12-16 20:10:36.351240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.351282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.351316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.848 [2024-12-16 20:10:36.351336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.848 [2024-12-16 20:10:36.351351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.351389] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup 
mode 1 00:17:28.848 [2024-12-16 20:10:36.351496] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:28.848 [2024-12-16 20:10:36.351806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.848 [2024-12-16 20:10:36.351834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:28.848 [2024-12-16 20:10:36.351860] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.848 [2024-12-16 20:10:36.351884] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.848 [2024-12-16 20:10:36.351909] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:28.848 [2024-12-16 20:10:36.351931] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.848 [2024-12-16 20:10:36.351948] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:28.848 [2024-12-16 20:10:36.351962] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:28.848 [2024-12-16 20:10:36.351979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.352114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.848 [2024-12-16 20:10:36.352134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:17:28.848 [2024-12-16 20:10:36.352149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.352211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.848 [2024-12-16 20:10:36.352227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.848 [2024-12-16 20:10:36.352244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:28.848 [2024-12-16 20:10:36.352323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.848 [2024-12-16 20:10:36.352402] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.848 [2024-12-16 20:10:36.352421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.848 [2024-12-16 20:10:36.352465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.848 [2024-12-16 20:10:36.352511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.848 [2024-12-16 20:10:36.352528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.848 [2024-12-16 20:10:36.352571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.848 [2024-12-16 20:10:36.352590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:28.848 [2024-12-16 20:10:36.352605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.848 [2024-12-16 20:10:36.352621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:28.848 [2024-12-16 20:10:36.352636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.848 [2024-12-16 20:10:36.352652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.848 [2024-12-16 20:10:36.352665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:28.848 [2024-12-16 20:10:36.352683] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.849 [2024-12-16 20:10:36.352733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.849 [2024-12-16 20:10:36.352752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:28.849 [2024-12-16 20:10:36.352766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.849 [2024-12-16 20:10:36.352784] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.849 [2024-12-16 20:10:36.352798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:28.849 [2024-12-16 20:10:36.352813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.849 [2024-12-16 20:10:36.352826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:28.849 [2024-12-16 20:10:36.352842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:28.849 [2024-12-16 20:10:36.352856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:28.849 [2024-12-16 20:10:36.352916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.849 [2024-12-16 20:10:36.352933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:28.849 [2024-12-16 20:10:36.352949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.849 [2024-12-16 20:10:36.352963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.849 [2024-12-16 20:10:36.352978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:28.849 [2024-12-16 20:10:36.352991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.849 [2024-12-16 20:10:36.353008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.849 [2024-12-16 20:10:36.353021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:28.849 [2024-12-16 20:10:36.353036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.849 [2024-12-16 20:10:36.353050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.849 [2024-12-16 20:10:36.353125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:28.849 [2024-12-16 20:10:36.353138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.849 [2024-12-16 20:10:36.353154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.849 [2024-12-16 20:10:36.353167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:28.849 [2024-12-16 20:10:36.353182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.849 [2024-12-16 20:10:36.353196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.849 [2024-12-16 20:10:36.353212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:28.849 [2024-12-16 20:10:36.353226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.849 [2024-12-16 20:10:36.353268] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.849 [2024-12-16 20:10:36.353323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.849 [2024-12-16 20:10:36.353343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.849 [2024-12-16 20:10:36.353381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.849 [2024-12-16 20:10:36.353403] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.849 [2024-12-16 20:10:36.353417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.849 [2024-12-16 20:10:36.353433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.849 [2024-12-16 20:10:36.353448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.849 [2024-12-16 20:10:36.353466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.849 [2024-12-16 20:10:36.353480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.849 [2024-12-16 20:10:36.353528] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.849 [2024-12-16 20:10:36.353555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.849 [2024-12-16 20:10:36.353580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:28.849 [2024-12-16 20:10:36.353603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:28.849 [2024-12-16 20:10:36.353626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:28.849 [2024-12-16 20:10:36.353674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:28.849 [2024-12-16 20:10:36.353698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:28.849 [2024-12-16 20:10:36.353746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:28.849 [2024-12-16 20:10:36.353771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:28.849 [2024-12-16 20:10:36.353812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:28.849 [2024-12-16 20:10:36.353838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:28.849 [2024-12-16 20:10:36.353860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:28.849 [2024-12-16 20:10:36.353884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:28.849 [2024-12-16 20:10:36.353907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:28.849 [2024-12-16 20:10:36.354198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:28.849 [2024-12-16 20:10:36.354382] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.849 [2024-12-16 20:10:36.354559] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 
00:17:28.849 [2024-12-16 20:10:36.354593] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.849 [2024-12-16 20:10:36.354619] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.849 [2024-12-16 20:10:36.354640] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.849 [2024-12-16 20:10:36.354664] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.849 [2024-12-16 20:10:36.354690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.354716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.849 [2024-12-16 20:10:36.354741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:17:28.849 [2024-12-16 20:10:36.354766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.376655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.376699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.849 [2024-12-16 20:10:36.376710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.754 ms 00:17:28.849 [2024-12-16 20:10:36.376719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.376805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.376818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.849 [2024-12-16 20:10:36.376829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:28.849 [2024-12-16 20:10:36.376839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.409552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.409594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.849 [2024-12-16 20:10:36.409605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.665 ms 00:17:28.849 [2024-12-16 20:10:36.409615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.409648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.409658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.849 [2024-12-16 20:10:36.409666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.849 [2024-12-16 20:10:36.409677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.410183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.410208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.849 [2024-12-16 20:10:36.410217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:17:28.849 [2024-12-16 20:10:36.410227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.410377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 
20:10:36.410392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.849 [2024-12-16 20:10:36.410401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:28.849 [2024-12-16 20:10:36.410410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.428174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.428218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.849 [2024-12-16 20:10:36.428229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.744 ms 00:17:28.849 [2024-12-16 20:10:36.428239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.849 [2024-12-16 20:10:36.441370] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:28.849 [2024-12-16 20:10:36.445099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.849 [2024-12-16 20:10:36.445141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.849 [2024-12-16 20:10:36.445155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.749 ms 00:17:28.849 [2024-12-16 20:10:36.445163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.111 [2024-12-16 20:10:36.544063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.111 [2024-12-16 20:10:36.544128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:29.111 [2024-12-16 20:10:36.544147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.862 ms 00:17:29.111 [2024-12-16 20:10:36.544156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.111 [2024-12-16 20:10:36.544216] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
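The trace above is the FTL bring-up that restore.sh drives through rpc.py before the first-startup scrub begins. A condensed sketch of that RPC sequence, reconstructed only from the xtrace output earlier in this log (the lvstore and lvol UUIDs are the ones printed above; a fresh run would generate different ones):

    # attach the base namespace and carve the full-size thin lvol out of it
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5bb754d4-b3c8-42c4-ad97-fa9d29a7745a
    # attach the cache controller and split off a 5171 MiB slice for the NV cache
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # create the FTL bdev on the lvol with nvc0n1p0 as write-buffer cache (240 s RPC timeout)
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 29f66bab-d7d6-416a-a4c4-1ccc8209ef61 --l2p_dram_limit 10 -c nvc0n1p0

The 5171 MiB cache size follows from the get_bdev_size helper traced above: block_size 4096 x num_blocks 26476544 / 1048576 = 103424 MiB for the lvol, of which ftl/common.sh takes 5171 MiB as cache.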
00:17:29.111 [2024-12-16 20:10:36.544229] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:33.319 [2024-12-16 20:10:40.491073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.491165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:33.319 [2024-12-16 20:10:40.491187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3946.832 ms 00:17:33.319 [2024-12-16 20:10:40.491197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.491453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.491468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:33.319 [2024-12-16 20:10:40.491486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:33.319 [2024-12-16 20:10:40.491494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.518451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.518509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:33.319 [2024-12-16 20:10:40.518527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.879 ms 00:17:33.319 [2024-12-16 20:10:40.518536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.544567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.544615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:33.319 [2024-12-16 20:10:40.544635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.970 ms 00:17:33.319 [2024-12-16 20:10:40.544643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.544997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.545008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:33.319 [2024-12-16 20:10:40.545020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:33.319 [2024-12-16 20:10:40.545027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.616270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.616335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:33.319 [2024-12-16 20:10:40.616354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.181 ms 00:17:33.319 [2024-12-16 20:10:40.616363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.644439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.644491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:33.319 [2024-12-16 20:10:40.644507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.016 ms 00:17:33.319 [2024-12-16 20:10:40.644515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.646000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.646049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:17:33.319 [2024-12-16 20:10:40.646065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:17:33.319 [2024-12-16 20:10:40.646074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.672792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.672841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:33.319 [2024-12-16 20:10:40.672856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.651 ms 00:17:33.319 [2024-12-16 20:10:40.672863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.672927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.672938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:33.319 [2024-12-16 20:10:40.672949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:33.319 [2024-12-16 20:10:40.672957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.673057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.319 [2024-12-16 20:10:40.673067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:33.319 [2024-12-16 20:10:40.673079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:33.319 [2024-12-16 20:10:40.673087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.319 [2024-12-16 20:10:40.674258] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4336.081 ms, result 0 00:17:33.319 { 00:17:33.319 "name": "ftl0", 00:17:33.319 "uuid": "a332268c-feec-4876-952a-9f7e0b8fbbf4" 00:17:33.319 } 00:17:33.319 20:10:40 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:33.319 20:10:40 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:33.319 20:10:40 -- ftl/restore.sh@63 -- # echo ']}' 00:17:33.319 20:10:40 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:33.580 [2024-12-16 20:10:41.089588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.089660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:33.580 [2024-12-16 20:10:41.089675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.580 [2024-12-16 20:10:41.089685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.089712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:33.580 [2024-12-16 20:10:41.092765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.092810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:33.580 [2024-12-16 20:10:41.092825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.029 ms 00:17:33.580 [2024-12-16 20:10:41.092841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.093119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.093129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:33.580 [2024-12-16 
20:10:41.093141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:33.580 [2024-12-16 20:10:41.093149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.096423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.096445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:33.580 [2024-12-16 20:10:41.096458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:17:33.580 [2024-12-16 20:10:41.096466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.102590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.102632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:33.580 [2024-12-16 20:10:41.102646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:17:33.580 [2024-12-16 20:10:41.102653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.130587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.130806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:33.580 [2024-12-16 20:10:41.130837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.834 ms 00:17:33.580 [2024-12-16 20:10:41.130846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.148514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.148566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:33.580 [2024-12-16 20:10:41.148583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.589 ms 00:17:33.580 [2024-12-16 20:10:41.148591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.148767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.148779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:33.580 [2024-12-16 20:10:41.148792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:33.580 [2024-12-16 20:10:41.148803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.175205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.175254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:33.580 [2024-12-16 20:10:41.175269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.372 ms 00:17:33.580 [2024-12-16 20:10:41.175276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-12-16 20:10:41.200604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-12-16 20:10:41.200652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:33.580 [2024-12-16 20:10:41.200666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.252 ms 00:17:33.580 [2024-12-16 20:10:41.200674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.843 [2024-12-16 20:10:41.225644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.843 [2024-12-16 20:10:41.225824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:17:33.843 [2024-12-16 20:10:41.225851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:17:33.843 [2024-12-16 20:10:41.225858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.843 [2024-12-16 20:10:41.251014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.843 [2024-12-16 20:10:41.251071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:33.843 [2024-12-16 20:10:41.251086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.049 ms 00:17:33.843 [2024-12-16 20:10:41.251093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.843 [2024-12-16 20:10:41.251147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:33.843 [2024-12-16 20:10:41.251167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 
[2024-12-16 20:10:41.251360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:33.843 [2024-12-16 20:10:41.251419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:17:33.844 [2024-12-16 20:10:41.251593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.251991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:33.844 [2024-12-16 20:10:41.252105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:33.844 [2024-12-16 20:10:41.252115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a332268c-feec-4876-952a-9f7e0b8fbbf4 00:17:33.844 [2024-12-16 20:10:41.252123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:33.844 [2024-12-16 20:10:41.252133] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:33.844 [2024-12-16 20:10:41.252141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:33.844 [2024-12-16 20:10:41.252151] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:33.844 [2024-12-16 20:10:41.252158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:33.844 [2024-12-16 20:10:41.252167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:33.844 [2024-12-16 20:10:41.252174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:33.844 [2024-12-16 20:10:41.252182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:33.844 [2024-12-16 20:10:41.252188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:33.844 [2024-12-16 20:10:41.252200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.844 [2024-12-16 20:10:41.252207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:33.844 [2024-12-16 20:10:41.252219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:17:33.844 [2024-12-16 20:10:41.252227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.844 [2024-12-16 20:10:41.265893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.844 [2024-12-16 20:10:41.265937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:33.844 [2024-12-16 20:10:41.265951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:17:33.844 [2024-12-16 20:10:41.265958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.844 [2024-12-16 20:10:41.266191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.844 [2024-12-16 20:10:41.266203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:33.845 [2024-12-16 20:10:41.266214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:33.845 [2024-12-16 20:10:41.266222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 
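The statistics dump in the trace above ends with "total writes: 960", "user writes: 0" and "WAF: inf". Purely as an illustration (not part of the captured output), the WAF line appears to be a write-amplification figure, i.e. total media writes divided by user writes, which is undefined here because the device was shut down before any user I/O was issued; a minimal C sketch of that arithmetic, using the logged counters as hypothetical inputs (the field names are illustrative, not the SPDK structures):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Counters as printed by the ftl_debug.c stats dump above. */
        double total_writes = 960.0;  /* "total writes" */
        double user_writes  = 0.0;    /* "user writes"  */

        /* Assumed relation: WAF = media writes / user writes.
         * On an IEEE 754 platform, dividing by 0.0 yields +inf. */
        double waf = total_writes / user_writes;

        if (isinf(waf))
            printf("WAF: inf\n");       /* matches the log line */
        else
            printf("WAF: %.2f\n", waf);
        return 0;
    }

On an IEEE 754 platform this prints "WAF: inf", matching the dump; once user writes are non-zero the same division would give a finite ratio.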
[2024-12-16 20:10:41.315863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.315911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.845 [2024-12-16 20:10:41.315925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.315934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.316017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.316028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.845 [2024-12-16 20:10:41.316039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.316047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.316127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.316138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.845 [2024-12-16 20:10:41.316149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.316156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.316177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.316185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.845 [2024-12-16 20:10:41.316197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.316204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.398414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.398467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.845 [2024-12-16 20:10:41.398483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.398492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.845 [2024-12-16 20:10:41.430206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.845 [2024-12-16 20:10:41.430341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.845 [2024-12-16 20:10:41.430421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430431] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.845 [2024-12-16 20:10:41.430560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:33.845 [2024-12-16 20:10:41.430624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.845 [2024-12-16 20:10:41.430699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.845 [2024-12-16 20:10:41.430767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.845 [2024-12-16 20:10:41.430777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.845 [2024-12-16 20:10:41.430788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.845 [2024-12-16 20:10:41.430937] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.307 ms, result 0 00:17:33.845 true 00:17:33.845 20:10:41 -- ftl/restore.sh@66 -- # killprocess 72767 00:17:33.845 20:10:41 -- common/autotest_common.sh@936 -- # '[' -z 72767 ']' 00:17:33.845 20:10:41 -- common/autotest_common.sh@940 -- # kill -0 72767 00:17:33.845 20:10:41 -- common/autotest_common.sh@941 -- # uname 00:17:33.845 20:10:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.845 20:10:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72767 00:17:34.106 killing process with pid 72767 00:17:34.106 20:10:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.106 20:10:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.106 20:10:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72767' 00:17:34.106 20:10:41 -- common/autotest_common.sh@955 -- # kill 72767 00:17:34.106 20:10:41 -- common/autotest_common.sh@960 -- # wait 72767 00:17:38.315 20:10:45 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:41.620 262144+0 records in 00:17:41.620 262144+0 records out 00:17:41.620 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.8857 s, 276 MB/s 00:17:41.620 20:10:49 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:43.536 20:10:51 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.536 [2024-12-16 20:10:51.110149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:43.536 [2024-12-16 20:10:51.110431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73021 ] 00:17:43.796 [2024-12-16 20:10:51.258631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.057 [2024-12-16 20:10:51.478106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.318 [2024-12-16 20:10:51.763630] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.318 [2024-12-16 20:10:51.763990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.318 [2024-12-16 20:10:51.920423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.318 [2024-12-16 20:10:51.920479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.318 [2024-12-16 20:10:51.920494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.318 [2024-12-16 20:10:51.920506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.318 [2024-12-16 20:10:51.920561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.318 [2024-12-16 20:10:51.920572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.318 [2024-12-16 20:10:51.920580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:44.318 [2024-12-16 20:10:51.920588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.318 [2024-12-16 20:10:51.920608] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.318 [2024-12-16 20:10:51.921706] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.318 [2024-12-16 20:10:51.921845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.318 [2024-12-16 20:10:51.921973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.319 [2024-12-16 20:10:51.921999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:17:44.319 [2024-12-16 20:10:51.922019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.924198] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.319 [2024-12-16 20:10:51.938956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.939149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.319 [2024-12-16 20:10:51.939625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.762 ms 00:17:44.319 [2024-12-16 20:10:51.939654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.939746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.939759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.319 [2024-12-16 20:10:51.939769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:44.319 [2024-12-16 20:10:51.939776] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.949279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.949503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.319 [2024-12-16 20:10:51.949527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.408 ms 00:17:44.319 [2024-12-16 20:10:51.949536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.949641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.949653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.319 [2024-12-16 20:10:51.949662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:44.319 [2024-12-16 20:10:51.949672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.949722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.949732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.319 [2024-12-16 20:10:51.949742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:44.319 [2024-12-16 20:10:51.949749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.949797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.319 [2024-12-16 20:10:51.954017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.954058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.319 [2024-12-16 20:10:51.954070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.235 ms 00:17:44.319 [2024-12-16 20:10:51.954078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.954118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.954126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.319 [2024-12-16 20:10:51.954135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:44.319 [2024-12-16 20:10:51.954146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.954185] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.319 [2024-12-16 20:10:51.954208] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:44.319 [2024-12-16 20:10:51.954245] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.319 [2024-12-16 20:10:51.954262] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:44.319 [2024-12-16 20:10:51.954366] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:44.319 [2024-12-16 20:10:51.954379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.319 [2024-12-16 20:10:51.954392] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:44.319 [2024-12-16 
20:10:51.954406] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954416] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954425] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.319 [2024-12-16 20:10:51.954434] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.319 [2024-12-16 20:10:51.954443] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:44.319 [2024-12-16 20:10:51.954454] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:44.319 [2024-12-16 20:10:51.954463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.954473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.319 [2024-12-16 20:10:51.954483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:17:44.319 [2024-12-16 20:10:51.954490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.954553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.319 [2024-12-16 20:10:51.954564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.319 [2024-12-16 20:10:51.954573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:44.319 [2024-12-16 20:10:51.954581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.319 [2024-12-16 20:10:51.954653] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.319 [2024-12-16 20:10:51.954665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.319 [2024-12-16 20:10:51.954673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.319 [2024-12-16 20:10:51.954698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.319 [2024-12-16 20:10:51.954720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.319 [2024-12-16 20:10:51.954735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.319 [2024-12-16 20:10:51.954741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.319 [2024-12-16 20:10:51.954748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.319 [2024-12-16 20:10:51.954756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.319 [2024-12-16 20:10:51.954765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:44.319 [2024-12-16 20:10:51.954772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.319 
[2024-12-16 20:10:51.954796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:44.319 [2024-12-16 20:10:51.954803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:44.319 [2024-12-16 20:10:51.954816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:44.319 [2024-12-16 20:10:51.954822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.319 [2024-12-16 20:10:51.954835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954851] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.319 [2024-12-16 20:10:51.954858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.319 [2024-12-16 20:10:51.954876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.319 [2024-12-16 20:10:51.954898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.319 [2024-12-16 20:10:51.954917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.319 [2024-12-16 20:10:51.954931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.319 [2024-12-16 20:10:51.954939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:44.319 [2024-12-16 20:10:51.954945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.319 [2024-12-16 20:10:51.954951] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.319 [2024-12-16 20:10:51.954962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.319 [2024-12-16 20:10:51.954969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.319 [2024-12-16 20:10:51.954977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.319 [2024-12-16 20:10:51.954984] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.319 [2024-12-16 20:10:51.954991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.319 [2024-12-16 20:10:51.954999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.319 [2024-12-16 20:10:51.955006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.319 [2024-12-16 20:10:51.955015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 0.25 MiB 00:17:44.319 [2024-12-16 20:10:51.955023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.319 [2024-12-16 20:10:51.955030] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.319 [2024-12-16 20:10:51.955040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.319 [2024-12-16 20:10:51.955050] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.319 [2024-12-16 20:10:51.955059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:44.319 [2024-12-16 20:10:51.955065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:44.319 [2024-12-16 20:10:51.955072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:44.319 [2024-12-16 20:10:51.955080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:44.319 [2024-12-16 20:10:51.955088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:44.319 [2024-12-16 20:10:51.955096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:44.319 [2024-12-16 20:10:51.955103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:44.319 [2024-12-16 20:10:51.955110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:44.319 [2024-12-16 20:10:51.955117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:44.319 [2024-12-16 20:10:51.955126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:44.319 [2024-12-16 20:10:51.955133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:44.319 [2024-12-16 20:10:51.955141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:44.319 [2024-12-16 20:10:51.955148] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.319 [2024-12-16 20:10:51.955156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.319 [2024-12-16 20:10:51.955166] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.320 [2024-12-16 20:10:51.955174] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.320 [2024-12-16 20:10:51.955181] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 
blk_offs:0x1900040 blk_sz:0x360 00:17:44.320 [2024-12-16 20:10:51.955187] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.320 [2024-12-16 20:10:51.955194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.320 [2024-12-16 20:10:51.955202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.320 [2024-12-16 20:10:51.955210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:17:44.320 [2024-12-16 20:10:51.955217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:51.973733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:51.973784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.581 [2024-12-16 20:10:51.973798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.473 ms 00:17:44.581 [2024-12-16 20:10:51.973814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:51.973908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:51.973917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.581 [2024-12-16 20:10:51.973927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:44.581 [2024-12-16 20:10:51.973934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.018653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.018709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.581 [2024-12-16 20:10:52.018722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.663 ms 00:17:44.581 [2024-12-16 20:10:52.018730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.018782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.018793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.581 [2024-12-16 20:10:52.018801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.581 [2024-12-16 20:10:52.018810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.019409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.019442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.581 [2024-12-16 20:10:52.019454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:44.581 [2024-12-16 20:10:52.019469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.019630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.019654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.581 [2024-12-16 20:10:52.019664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:44.581 [2024-12-16 20:10:52.019672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.036361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.036560] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.581 [2024-12-16 20:10:52.036580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.663 ms 00:17:44.581 [2024-12-16 20:10:52.036589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.051211] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:44.581 [2024-12-16 20:10:52.051260] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.581 [2024-12-16 20:10:52.051275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.051284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.581 [2024-12-16 20:10:52.051295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.568 ms 00:17:44.581 [2024-12-16 20:10:52.051319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.077559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.077611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.581 [2024-12-16 20:10:52.077624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.185 ms 00:17:44.581 [2024-12-16 20:10:52.077632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.091012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.091208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.581 [2024-12-16 20:10:52.091229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.323 ms 00:17:44.581 [2024-12-16 20:10:52.091238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.103906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.103969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.581 [2024-12-16 20:10:52.103992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.610 ms 00:17:44.581 [2024-12-16 20:10:52.104000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.104442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.104458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.581 [2024-12-16 20:10:52.104468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:44.581 [2024-12-16 20:10:52.104480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.581 [2024-12-16 20:10:52.170783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.581 [2024-12-16 20:10:52.170842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.581 [2024-12-16 20:10:52.170858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.282 ms 00:17:44.581 [2024-12-16 20:10:52.170867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.183129] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:44.582 [2024-12-16 20:10:52.186464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.186508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.582 [2024-12-16 20:10:52.186521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.529 ms 00:17:44.582 [2024-12-16 20:10:52.186530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.186615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.186626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.582 [2024-12-16 20:10:52.186637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.582 [2024-12-16 20:10:52.186645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.186713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.186724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.582 [2024-12-16 20:10:52.186733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:44.582 [2024-12-16 20:10:52.186741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.188150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.188203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:44.582 [2024-12-16 20:10:52.188214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:17:44.582 [2024-12-16 20:10:52.188222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.188263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.188272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.582 [2024-12-16 20:10:52.188281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:44.582 [2024-12-16 20:10:52.188295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.188363] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.582 [2024-12-16 20:10:52.188374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.188383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.582 [2024-12-16 20:10:52.188395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:44.582 [2024-12-16 20:10:52.188404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.215011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.215060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.582 [2024-12-16 20:10:52.215074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.582 ms 00:17:44.582 [2024-12-16 20:10:52.215082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.215167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.582 [2024-12-16 20:10:52.215185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.582 [2024-12-16 20:10:52.215194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 
00:17:44.582 [2024-12-16 20:10:52.215202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.582 [2024-12-16 20:10:52.216670] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.731 ms, result 0 00:17:45.965  [2024-12-16T20:10:54.549Z] Copying: 29/1024 [MB] (29 MBps) [2024-12-16T20:10:55.489Z] Copying: 52/1024 [MB] (23 MBps) [2024-12-16T20:10:56.432Z] Copying: 63/1024 [MB] (10 MBps) [2024-12-16T20:10:57.375Z] Copying: 77/1024 [MB] (13 MBps) [2024-12-16T20:10:58.317Z] Copying: 122/1024 [MB] (45 MBps) [2024-12-16T20:10:59.261Z] Copying: 169/1024 [MB] (47 MBps) [2024-12-16T20:11:00.647Z] Copying: 222/1024 [MB] (53 MBps) [2024-12-16T20:11:01.277Z] Copying: 264/1024 [MB] (41 MBps) [2024-12-16T20:11:02.663Z] Copying: 317/1024 [MB] (53 MBps) [2024-12-16T20:11:03.235Z] Copying: 371/1024 [MB] (53 MBps) [2024-12-16T20:11:04.622Z] Copying: 391/1024 [MB] (20 MBps) [2024-12-16T20:11:05.564Z] Copying: 401/1024 [MB] (10 MBps) [2024-12-16T20:11:06.508Z] Copying: 412/1024 [MB] (10 MBps) [2024-12-16T20:11:07.451Z] Copying: 428/1024 [MB] (15 MBps) [2024-12-16T20:11:08.394Z] Copying: 449/1024 [MB] (21 MBps) [2024-12-16T20:11:09.338Z] Copying: 465/1024 [MB] (16 MBps) [2024-12-16T20:11:10.283Z] Copying: 482/1024 [MB] (17 MBps) [2024-12-16T20:11:11.229Z] Copying: 495/1024 [MB] (13 MBps) [2024-12-16T20:11:12.618Z] Copying: 510/1024 [MB] (14 MBps) [2024-12-16T20:11:13.561Z] Copying: 526/1024 [MB] (15 MBps) [2024-12-16T20:11:14.504Z] Copying: 541/1024 [MB] (15 MBps) [2024-12-16T20:11:15.451Z] Copying: 557/1024 [MB] (16 MBps) [2024-12-16T20:11:16.393Z] Copying: 571/1024 [MB] (13 MBps) [2024-12-16T20:11:17.337Z] Copying: 584/1024 [MB] (13 MBps) [2024-12-16T20:11:18.280Z] Copying: 601/1024 [MB] (16 MBps) [2024-12-16T20:11:19.666Z] Copying: 614/1024 [MB] (13 MBps) [2024-12-16T20:11:20.238Z] Copying: 625/1024 [MB] (11 MBps) [2024-12-16T20:11:21.625Z] Copying: 637/1024 [MB] (11 MBps) [2024-12-16T20:11:22.569Z] Copying: 651/1024 [MB] (13 MBps) [2024-12-16T20:11:23.513Z] Copying: 665/1024 [MB] (14 MBps) [2024-12-16T20:11:24.456Z] Copying: 676/1024 [MB] (11 MBps) [2024-12-16T20:11:25.401Z] Copying: 687/1024 [MB] (10 MBps) [2024-12-16T20:11:26.344Z] Copying: 714172/1048576 [kB] (10236 kBps) [2024-12-16T20:11:27.288Z] Copying: 717/1024 [MB] (20 MBps) [2024-12-16T20:11:28.340Z] Copying: 730/1024 [MB] (13 MBps) [2024-12-16T20:11:29.285Z] Copying: 745/1024 [MB] (14 MBps) [2024-12-16T20:11:30.230Z] Copying: 759/1024 [MB] (14 MBps) [2024-12-16T20:11:31.617Z] Copying: 779/1024 [MB] (19 MBps) [2024-12-16T20:11:32.561Z] Copying: 798/1024 [MB] (19 MBps) [2024-12-16T20:11:33.507Z] Copying: 810/1024 [MB] (11 MBps) [2024-12-16T20:11:34.452Z] Copying: 825/1024 [MB] (14 MBps) [2024-12-16T20:11:35.398Z] Copying: 838/1024 [MB] (13 MBps) [2024-12-16T20:11:36.342Z] Copying: 856/1024 [MB] (17 MBps) [2024-12-16T20:11:37.287Z] Copying: 874/1024 [MB] (18 MBps) [2024-12-16T20:11:38.231Z] Copying: 894/1024 [MB] (20 MBps) [2024-12-16T20:11:39.619Z] Copying: 908/1024 [MB] (13 MBps) [2024-12-16T20:11:40.563Z] Copying: 921/1024 [MB] (13 MBps) [2024-12-16T20:11:41.505Z] Copying: 935/1024 [MB] (14 MBps) [2024-12-16T20:11:42.448Z] Copying: 955/1024 [MB] (20 MBps) [2024-12-16T20:11:43.391Z] Copying: 976/1024 [MB] (20 MBps) [2024-12-16T20:11:44.336Z] Copying: 988/1024 [MB] (12 MBps) [2024-12-16T20:11:45.284Z] Copying: 1006/1024 [MB] (17 MBps) [2024-12-16T20:11:45.284Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-16 20:11:45.140715] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.140776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:37.644 [2024-12-16 20:11:45.140791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:37.644 [2024-12-16 20:11:45.140801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.644 [2024-12-16 20:11:45.140823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:37.644 [2024-12-16 20:11:45.143778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.143981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:37.644 [2024-12-16 20:11:45.144013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:18:37.644 [2024-12-16 20:11:45.144021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.644 [2024-12-16 20:11:45.146590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.146635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:37.644 [2024-12-16 20:11:45.146646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:18:37.644 [2024-12-16 20:11:45.146654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.644 [2024-12-16 20:11:45.167870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.167917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:37.644 [2024-12-16 20:11:45.167929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.198 ms 00:18:37.644 [2024-12-16 20:11:45.167945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.644 [2024-12-16 20:11:45.174072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.174251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:37.644 [2024-12-16 20:11:45.174271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.088 ms 00:18:37.644 [2024-12-16 20:11:45.174279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.644 [2024-12-16 20:11:45.200641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.200817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:37.644 [2024-12-16 20:11:45.200838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.268 ms 00:18:37.644 [2024-12-16 20:11:45.200846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.644 [2024-12-16 20:11:45.216922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.644 [2024-12-16 20:11:45.216970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:37.644 [2024-12-16 20:11:45.216982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.989 ms 00:18:37.645 [2024-12-16 20:11:45.217000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.645 [2024-12-16 20:11:45.217158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.645 [2024-12-16 20:11:45.217170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:37.645 [2024-12-16 20:11:45.217179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.106 ms 00:18:37.645 [2024-12-16 20:11:45.217187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.645 [2024-12-16 20:11:45.243418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.645 [2024-12-16 20:11:45.243596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:37.645 [2024-12-16 20:11:45.243616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.216 ms 00:18:37.645 [2024-12-16 20:11:45.243623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.645 [2024-12-16 20:11:45.269230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.645 [2024-12-16 20:11:45.269276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:37.645 [2024-12-16 20:11:45.269287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.533 ms 00:18:37.645 [2024-12-16 20:11:45.269324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.906 [2024-12-16 20:11:45.294155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.906 [2024-12-16 20:11:45.294202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:37.906 [2024-12-16 20:11:45.294212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.788 ms 00:18:37.906 [2024-12-16 20:11:45.294218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.906 [2024-12-16 20:11:45.318681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.906 [2024-12-16 20:11:45.318728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:37.906 [2024-12-16 20:11:45.318740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.361 ms 00:18:37.906 [2024-12-16 20:11:45.318746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.906 [2024-12-16 20:11:45.318786] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:37.906 [2024-12-16 20:11:45.318801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318886] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:37.906 [2024-12-16 20:11:45.318998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 
20:11:45.319075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:18:37.907 [2024-12-16 20:11:45.319263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:37.907 [2024-12-16 20:11:45.319601] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:37.907 [2024-12-16 20:11:45.319616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a332268c-feec-4876-952a-9f7e0b8fbbf4 00:18:37.907 [2024-12-16 20:11:45.319624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:37.907 [2024-12-16 20:11:45.319632] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:37.907 [2024-12-16 20:11:45.319639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:37.907 [2024-12-16 20:11:45.319647] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:37.907 [2024-12-16 20:11:45.319654] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:37.907 [2024-12-16 20:11:45.319662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:37.907 [2024-12-16 20:11:45.319670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:37.907 [2024-12-16 20:11:45.319676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:37.907 [2024-12-16 20:11:45.319690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:37.907 [2024-12-16 20:11:45.319699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.907 
[2024-12-16 20:11:45.319707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:37.907 [2024-12-16 20:11:45.319732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:18:37.907 [2024-12-16 20:11:45.319743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.907 [2024-12-16 20:11:45.332994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.907 [2024-12-16 20:11:45.333034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:37.907 [2024-12-16 20:11:45.333046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.217 ms 00:18:37.907 [2024-12-16 20:11:45.333054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.907 [2024-12-16 20:11:45.333279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.907 [2024-12-16 20:11:45.333288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:37.907 [2024-12-16 20:11:45.333325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:18:37.907 [2024-12-16 20:11:45.333332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.907 [2024-12-16 20:11:45.372286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.907 [2024-12-16 20:11:45.372347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.907 [2024-12-16 20:11:45.372358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.907 [2024-12-16 20:11:45.372367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.372447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.372457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.908 [2024-12-16 20:11:45.372472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.372479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.372551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.372561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.908 [2024-12-16 20:11:45.372569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.372576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.372592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.372600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.908 [2024-12-16 20:11:45.372608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.372619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.452275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.452525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.908 [2024-12-16 20:11:45.452549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.452557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.484574] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.484620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.908 [2024-12-16 20:11:45.484632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.484646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.484715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.484726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:37.908 [2024-12-16 20:11:45.484735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.484743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.484785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.484794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:37.908 [2024-12-16 20:11:45.484802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.484810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.484916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.484927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:37.908 [2024-12-16 20:11:45.484936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.484944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.484976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.484985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:37.908 [2024-12-16 20:11:45.484994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.485002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.485047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.485057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:37.908 [2024-12-16 20:11:45.485066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.485073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.485121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.908 [2024-12-16 20:11:45.485132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:37.908 [2024-12-16 20:11:45.485140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.908 [2024-12-16 20:11:45.485149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.908 [2024-12-16 20:11:45.485282] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.532 ms, result 0 00:18:39.292 00:18:39.292 00:18:39.292 20:11:46 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
--count=262144 00:18:39.292 [2024-12-16 20:11:46.728872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.292 [2024-12-16 20:11:46.729005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73603 ] 00:18:39.292 [2024-12-16 20:11:46.880278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.553 [2024-12-16 20:11:47.097287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.814 [2024-12-16 20:11:47.381618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.815 [2024-12-16 20:11:47.381698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.078 [2024-12-16 20:11:47.536704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.536762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.078 [2024-12-16 20:11:47.536778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:40.078 [2024-12-16 20:11:47.536789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.536843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.536854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.078 [2024-12-16 20:11:47.536862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:40.078 [2024-12-16 20:11:47.536870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.536890] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.078 [2024-12-16 20:11:47.537782] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.078 [2024-12-16 20:11:47.537833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.537842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.078 [2024-12-16 20:11:47.537851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:18:40.078 [2024-12-16 20:11:47.537858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.539535] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.078 [2024-12-16 20:11:47.553977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.554023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.078 [2024-12-16 20:11:47.554036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.444 ms 00:18:40.078 [2024-12-16 20:11:47.554044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.554117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.554127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.078 [2024-12-16 20:11:47.554136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:40.078 [2024-12-16 20:11:47.554144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:40.078 [2024-12-16 20:11:47.562140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.562366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.078 [2024-12-16 20:11:47.562385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.921 ms 00:18:40.078 [2024-12-16 20:11:47.562393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.562489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.562499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.078 [2024-12-16 20:11:47.562509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:40.078 [2024-12-16 20:11:47.562517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.562562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.562571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.078 [2024-12-16 20:11:47.562580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:40.078 [2024-12-16 20:11:47.562587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.562618] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.078 [2024-12-16 20:11:47.566724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.566759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.078 [2024-12-16 20:11:47.566770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.120 ms 00:18:40.078 [2024-12-16 20:11:47.566777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.566815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.566824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.078 [2024-12-16 20:11:47.566832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:40.078 [2024-12-16 20:11:47.566843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.566892] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.078 [2024-12-16 20:11:47.566915] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:40.078 [2024-12-16 20:11:47.566950] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.078 [2024-12-16 20:11:47.566965] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:40.078 [2024-12-16 20:11:47.567040] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:40.078 [2024-12-16 20:11:47.567051] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.078 [2024-12-16 20:11:47.567064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:40.078 [2024-12-16 20:11:47.567074] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.078 [2024-12-16 20:11:47.567083] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.078 [2024-12-16 20:11:47.567091] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.078 [2024-12-16 20:11:47.567099] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.078 [2024-12-16 20:11:47.567107] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:40.078 [2024-12-16 20:11:47.567114] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:40.078 [2024-12-16 20:11:47.567122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.567130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.078 [2024-12-16 20:11:47.567138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:18:40.078 [2024-12-16 20:11:47.567146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.567209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.078 [2024-12-16 20:11:47.567217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.078 [2024-12-16 20:11:47.567224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:40.078 [2024-12-16 20:11:47.567231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.078 [2024-12-16 20:11:47.567323] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.078 [2024-12-16 20:11:47.567335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.078 [2024-12-16 20:11:47.567344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.078 [2024-12-16 20:11:47.567352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.078 [2024-12-16 20:11:47.567360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.078 [2024-12-16 20:11:47.567366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.078 [2024-12-16 20:11:47.567373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.078 [2024-12-16 20:11:47.567382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.078 [2024-12-16 20:11:47.567389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.078 [2024-12-16 20:11:47.567395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.078 [2024-12-16 20:11:47.567402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.079 [2024-12-16 20:11:47.567409] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.079 [2024-12-16 20:11:47.567417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.079 [2024-12-16 20:11:47.567424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.079 [2024-12-16 20:11:47.567430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:40.079 [2024-12-16 20:11:47.567437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567451] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.079 [2024-12-16 20:11:47.567458] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:40.079 [2024-12-16 20:11:47.567464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:40.079 [2024-12-16 20:11:47.567478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:40.079 [2024-12-16 20:11:47.567485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.079 [2024-12-16 20:11:47.567498] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.079 [2024-12-16 20:11:47.567520] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.079 [2024-12-16 20:11:47.567540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.079 [2024-12-16 20:11:47.567559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.079 [2024-12-16 20:11:47.567579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.079 [2024-12-16 20:11:47.567592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.079 [2024-12-16 20:11:47.567598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:40.079 [2024-12-16 20:11:47.567604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.079 [2024-12-16 20:11:47.567610] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.079 [2024-12-16 20:11:47.567620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.079 [2024-12-16 20:11:47.567628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.079 [2024-12-16 20:11:47.567645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.079 [2024-12-16 20:11:47.567651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.079 [2024-12-16 20:11:47.567659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.079 [2024-12-16 20:11:47.567666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.079 [2024-12-16 20:11:47.567672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.079 [2024-12-16 20:11:47.567680] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.079 [2024-12-16 20:11:47.567688] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.079 [2024-12-16 20:11:47.567698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.079 [2024-12-16 20:11:47.567706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.079 [2024-12-16 20:11:47.567713] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:40.079 [2024-12-16 20:11:47.567745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:40.079 [2024-12-16 20:11:47.567753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:40.079 [2024-12-16 20:11:47.567761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:40.079 [2024-12-16 20:11:47.567769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:40.079 [2024-12-16 20:11:47.567777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:40.079 [2024-12-16 20:11:47.567784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:40.079 [2024-12-16 20:11:47.567791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:40.079 [2024-12-16 20:11:47.567799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:40.079 [2024-12-16 20:11:47.567806] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:40.079 [2024-12-16 20:11:47.567814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:40.079 [2024-12-16 20:11:47.567822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:40.079 [2024-12-16 20:11:47.567830] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.079 [2024-12-16 20:11:47.567838] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.079 [2024-12-16 20:11:47.567848] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.079 [2024-12-16 20:11:47.567855] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.079 [2024-12-16 20:11:47.567862] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.079 [2024-12-16 
20:11:47.567872] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.079 [2024-12-16 20:11:47.567880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.567887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.079 [2024-12-16 20:11:47.567895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:18:40.079 [2024-12-16 20:11:47.567902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.585959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.586008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.079 [2024-12-16 20:11:47.586020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.016 ms 00:18:40.079 [2024-12-16 20:11:47.586034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.586125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.586134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.079 [2024-12-16 20:11:47.586144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:40.079 [2024-12-16 20:11:47.586152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.630802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.630856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.079 [2024-12-16 20:11:47.630869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.596 ms 00:18:40.079 [2024-12-16 20:11:47.630877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.630926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.630936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.079 [2024-12-16 20:11:47.630945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.079 [2024-12-16 20:11:47.630952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.631520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.631552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.079 [2024-12-16 20:11:47.631562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:18:40.079 [2024-12-16 20:11:47.631576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.631703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.631733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.079 [2024-12-16 20:11:47.631743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:40.079 [2024-12-16 20:11:47.631750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.648090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.648135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:18:40.079 [2024-12-16 20:11:47.648146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.316 ms 00:18:40.079 [2024-12-16 20:11:47.648154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.662520] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:40.079 [2024-12-16 20:11:47.662569] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.079 [2024-12-16 20:11:47.662582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.662590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.079 [2024-12-16 20:11:47.662600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.321 ms 00:18:40.079 [2024-12-16 20:11:47.662607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.079 [2024-12-16 20:11:47.688309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.079 [2024-12-16 20:11:47.688356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.079 [2024-12-16 20:11:47.688368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.643 ms 00:18:40.079 [2024-12-16 20:11:47.688377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.080 [2024-12-16 20:11:47.701292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.080 [2024-12-16 20:11:47.701344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:40.080 [2024-12-16 20:11:47.701355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.861 ms 00:18:40.080 [2024-12-16 20:11:47.701362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.080 [2024-12-16 20:11:47.713841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.080 [2024-12-16 20:11:47.713893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.080 [2024-12-16 20:11:47.713905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.434 ms 00:18:40.080 [2024-12-16 20:11:47.713911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.080 [2024-12-16 20:11:47.714323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.080 [2024-12-16 20:11:47.714338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.080 [2024-12-16 20:11:47.714349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:40.080 [2024-12-16 20:11:47.714357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.780440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.780500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.341 [2024-12-16 20:11:47.780516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.065 ms 00:18:40.341 [2024-12-16 20:11:47.780524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.792551] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:40.341 [2024-12-16 20:11:47.795771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.795813] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.341 [2024-12-16 20:11:47.795825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.189 ms 00:18:40.341 [2024-12-16 20:11:47.795840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.795921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.795933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.341 [2024-12-16 20:11:47.795942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.341 [2024-12-16 20:11:47.795950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.796017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.796028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.341 [2024-12-16 20:11:47.796037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:40.341 [2024-12-16 20:11:47.796045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.797420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.797462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:40.341 [2024-12-16 20:11:47.797473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:18:40.341 [2024-12-16 20:11:47.797480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.797516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.797525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.341 [2024-12-16 20:11:47.797539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.341 [2024-12-16 20:11:47.797547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.797582] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.341 [2024-12-16 20:11:47.797592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.797603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.341 [2024-12-16 20:11:47.797611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:40.341 [2024-12-16 20:11:47.797618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.823442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.823487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.341 [2024-12-16 20:11:47.823500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.804 ms 00:18:40.341 [2024-12-16 20:11:47.823509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.823594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-12-16 20:11:47.823603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.341 [2024-12-16 20:11:47.823612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:40.341 [2024-12-16 20:11:47.823621] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-12-16 20:11:47.824836] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.674 ms, result 0 00:18:41.730  [2024-12-16T20:11:50.314Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-16T20:11:51.260Z] Copying: 47/1024 [MB] (29 MBps) [2024-12-16T20:11:52.202Z] Copying: 65/1024 [MB] (17 MBps) [2024-12-16T20:11:53.157Z] Copying: 82/1024 [MB] (16 MBps) [2024-12-16T20:11:54.139Z] Copying: 102/1024 [MB] (20 MBps) [2024-12-16T20:11:55.084Z] Copying: 124/1024 [MB] (21 MBps) [2024-12-16T20:11:56.026Z] Copying: 134/1024 [MB] (10 MBps) [2024-12-16T20:11:57.413Z] Copying: 158/1024 [MB] (23 MBps) [2024-12-16T20:11:58.358Z] Copying: 178/1024 [MB] (20 MBps) [2024-12-16T20:11:59.304Z] Copying: 195/1024 [MB] (16 MBps) [2024-12-16T20:12:00.246Z] Copying: 213/1024 [MB] (17 MBps) [2024-12-16T20:12:01.189Z] Copying: 238/1024 [MB] (25 MBps) [2024-12-16T20:12:02.131Z] Copying: 269/1024 [MB] (30 MBps) [2024-12-16T20:12:03.074Z] Copying: 292/1024 [MB] (23 MBps) [2024-12-16T20:12:04.018Z] Copying: 317/1024 [MB] (24 MBps) [2024-12-16T20:12:05.402Z] Copying: 339/1024 [MB] (22 MBps) [2024-12-16T20:12:06.345Z] Copying: 361/1024 [MB] (21 MBps) [2024-12-16T20:12:07.288Z] Copying: 378/1024 [MB] (17 MBps) [2024-12-16T20:12:08.232Z] Copying: 409/1024 [MB] (30 MBps) [2024-12-16T20:12:09.176Z] Copying: 435/1024 [MB] (26 MBps) [2024-12-16T20:12:10.118Z] Copying: 453/1024 [MB] (18 MBps) [2024-12-16T20:12:11.061Z] Copying: 471/1024 [MB] (17 MBps) [2024-12-16T20:12:12.005Z] Copying: 485/1024 [MB] (13 MBps) [2024-12-16T20:12:13.393Z] Copying: 497/1024 [MB] (12 MBps) [2024-12-16T20:12:14.338Z] Copying: 515/1024 [MB] (17 MBps) [2024-12-16T20:12:15.281Z] Copying: 533/1024 [MB] (17 MBps) [2024-12-16T20:12:16.225Z] Copying: 544/1024 [MB] (10 MBps) [2024-12-16T20:12:17.168Z] Copying: 554/1024 [MB] (10 MBps) [2024-12-16T20:12:18.113Z] Copying: 565/1024 [MB] (10 MBps) [2024-12-16T20:12:19.116Z] Copying: 587/1024 [MB] (22 MBps) [2024-12-16T20:12:20.059Z] Copying: 599/1024 [MB] (11 MBps) [2024-12-16T20:12:21.445Z] Copying: 609/1024 [MB] (10 MBps) [2024-12-16T20:12:22.019Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-16T20:12:23.405Z] Copying: 631/1024 [MB] (10 MBps) [2024-12-16T20:12:24.348Z] Copying: 651/1024 [MB] (20 MBps) [2024-12-16T20:12:25.295Z] Copying: 664/1024 [MB] (12 MBps) [2024-12-16T20:12:26.242Z] Copying: 681/1024 [MB] (16 MBps) [2024-12-16T20:12:27.186Z] Copying: 702/1024 [MB] (21 MBps) [2024-12-16T20:12:28.129Z] Copying: 725/1024 [MB] (22 MBps) [2024-12-16T20:12:29.072Z] Copying: 744/1024 [MB] (19 MBps) [2024-12-16T20:12:30.015Z] Copying: 763/1024 [MB] (18 MBps) [2024-12-16T20:12:31.401Z] Copying: 786/1024 [MB] (22 MBps) [2024-12-16T20:12:32.344Z] Copying: 803/1024 [MB] (17 MBps) [2024-12-16T20:12:33.288Z] Copying: 827/1024 [MB] (24 MBps) [2024-12-16T20:12:34.234Z] Copying: 842/1024 [MB] (14 MBps) [2024-12-16T20:12:35.179Z] Copying: 857/1024 [MB] (15 MBps) [2024-12-16T20:12:36.124Z] Copying: 874/1024 [MB] (17 MBps) [2024-12-16T20:12:37.068Z] Copying: 886/1024 [MB] (11 MBps) [2024-12-16T20:12:38.013Z] Copying: 899/1024 [MB] (13 MBps) [2024-12-16T20:12:39.399Z] Copying: 910/1024 [MB] (10 MBps) [2024-12-16T20:12:40.343Z] Copying: 923/1024 [MB] (13 MBps) [2024-12-16T20:12:41.287Z] Copying: 954/1024 [MB] (31 MBps) [2024-12-16T20:12:42.231Z] Copying: 969/1024 [MB] (14 MBps) [2024-12-16T20:12:43.174Z] Copying: 991/1024 [MB] (21 MBps) [2024-12-16T20:12:43.746Z] Copying: 1013/1024 [MB] (22 MBps) 
[2024-12-16T20:12:44.007Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-16 20:12:43.848101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.848201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.367 [2024-12-16 20:12:43.848217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.367 [2024-12-16 20:12:43.848226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.848251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.367 [2024-12-16 20:12:43.851194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.851244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.367 [2024-12-16 20:12:43.851255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:19:36.367 [2024-12-16 20:12:43.851263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.851519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.851530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.367 [2024-12-16 20:12:43.851541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:19:36.367 [2024-12-16 20:12:43.851548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.856160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.856189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.367 [2024-12-16 20:12:43.856205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:19:36.367 [2024-12-16 20:12:43.856212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.862900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.862942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:36.367 [2024-12-16 20:12:43.862952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.667 ms 00:19:36.367 [2024-12-16 20:12:43.862960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.890668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.890719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.367 [2024-12-16 20:12:43.890732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.618 ms 00:19:36.367 [2024-12-16 20:12:43.890740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.906878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.906924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.367 [2024-12-16 20:12:43.906937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.088 ms 00:19:36.367 [2024-12-16 20:12:43.906952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.907114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.907127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:19:36.367 [2024-12-16 20:12:43.907137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:36.367 [2024-12-16 20:12:43.907145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.933054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.933100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:36.367 [2024-12-16 20:12:43.933113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.894 ms 00:19:36.367 [2024-12-16 20:12:43.933120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.958497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.958542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:36.367 [2024-12-16 20:12:43.958568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:19:36.367 [2024-12-16 20:12:43.958575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.367 [2024-12-16 20:12:43.983008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.367 [2024-12-16 20:12:43.983053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.367 [2024-12-16 20:12:43.983065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.389 ms 00:19:36.367 [2024-12-16 20:12:43.983072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.630 [2024-12-16 20:12:44.007786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.630 [2024-12-16 20:12:44.007832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.630 [2024-12-16 20:12:44.007844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.627 ms 00:19:36.630 [2024-12-16 20:12:44.007862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.630 [2024-12-16 20:12:44.007905] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.630 [2024-12-16 20:12:44.007928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.007996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008004] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008198] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 
20:12:44.008429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:36.630 [2024-12-16 20:12:44.008569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:19:36.631 [2024-12-16 20:12:44.008661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:36.631 [2024-12-16 20:12:44.008791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:36.631 [2024-12-16 20:12:44.008800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a332268c-feec-4876-952a-9f7e0b8fbbf4 00:19:36.631 [2024-12-16 20:12:44.008808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:36.631 [2024-12-16 20:12:44.008815] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:36.631 [2024-12-16 20:12:44.008823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:36.631 [2024-12-16 20:12:44.008831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:36.631 [2024-12-16 20:12:44.008838] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:36.631 [2024-12-16 20:12:44.008847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:36.631 [2024-12-16 20:12:44.008854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.631 [2024-12-16 20:12:44.008868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.631 [2024-12-16 20:12:44.008875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:19:36.631 [2024-12-16 20:12:44.008883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.631 [2024-12-16 20:12:44.008891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.631 [2024-12-16 20:12:44.008905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:19:36.631 [2024-12-16 20:12:44.008913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.022496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.631 [2024-12-16 20:12:44.022536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.631 [2024-12-16 20:12:44.022547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.547 ms 00:19:36.631 [2024-12-16 20:12:44.022557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.022787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.631 [2024-12-16 20:12:44.022805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.631 [2024-12-16 20:12:44.022814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:19:36.631 [2024-12-16 20:12:44.022821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.061410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.061458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.631 [2024-12-16 20:12:44.061470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.061478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.061540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.061555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.631 [2024-12-16 20:12:44.061564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.061571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.061642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.061652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.631 [2024-12-16 20:12:44.061660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.061668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.061684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.061692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.631 [2024-12-16 20:12:44.061704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.061712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.141015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.141065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.631 [2024-12-16 20:12:44.141076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.141085] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.631 [2024-12-16 20:12:44.173460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.631 [2024-12-16 20:12:44.173552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.631 [2024-12-16 20:12:44.173619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.631 [2024-12-16 20:12:44.173750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.631 [2024-12-16 20:12:44.173806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.631 [2024-12-16 20:12:44.173873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.173926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.631 [2024-12-16 20:12:44.173936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.631 [2024-12-16 20:12:44.173945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.631 [2024-12-16 20:12:44.173956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.631 [2024-12-16 20:12:44.174084] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 325.962 ms, result 0 00:19:37.628 00:19:37.628 00:19:37.628 20:12:45 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 
00:19:40.172 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:40.172 20:12:47 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:40.172 [2024-12-16 20:12:47.323790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:40.172 [2024-12-16 20:12:47.324112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ] 00:19:40.172 [2024-12-16 20:12:47.468564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.172 [2024-12-16 20:12:47.695201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.433 [2024-12-16 20:12:47.979329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.433 [2024-12-16 20:12:47.979408] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.696 [2024-12-16 20:12:48.134500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.134559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.696 [2024-12-16 20:12:48.134574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.696 [2024-12-16 20:12:48.134585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.134639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.134649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.696 [2024-12-16 20:12:48.134658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:40.696 [2024-12-16 20:12:48.134666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.134686] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.696 [2024-12-16 20:12:48.135467] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.696 [2024-12-16 20:12:48.135486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.135494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.696 [2024-12-16 20:12:48.135504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:19:40.696 [2024-12-16 20:12:48.135511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.137259] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.696 [2024-12-16 20:12:48.151357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.151404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.696 [2024-12-16 20:12:48.151417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.101 ms 00:19:40.696 [2024-12-16 20:12:48.151425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.151498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.151508] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.696 [2024-12-16 20:12:48.151517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:40.696 [2024-12-16 20:12:48.151524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.159614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.159654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.696 [2024-12-16 20:12:48.159665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.013 ms 00:19:40.696 [2024-12-16 20:12:48.159672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.159768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.159778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.696 [2024-12-16 20:12:48.159787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:40.696 [2024-12-16 20:12:48.159798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.159839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.159848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.696 [2024-12-16 20:12:48.159856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:40.696 [2024-12-16 20:12:48.159890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.159920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.696 [2024-12-16 20:12:48.164224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.164430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.696 [2024-12-16 20:12:48.164450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:19:40.696 [2024-12-16 20:12:48.164458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.164507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.696 [2024-12-16 20:12:48.164515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.696 [2024-12-16 20:12:48.164527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:40.696 [2024-12-16 20:12:48.164534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.696 [2024-12-16 20:12:48.164584] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.697 [2024-12-16 20:12:48.164606] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:40.697 [2024-12-16 20:12:48.164641] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.697 [2024-12-16 20:12:48.164656] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:40.697 [2024-12-16 20:12:48.164733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:40.697 [2024-12-16 20:12:48.164746] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.697 [2024-12-16 20:12:48.164756] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:40.697 [2024-12-16 20:12:48.164767] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.697 [2024-12-16 20:12:48.164776] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.697 [2024-12-16 20:12:48.164784] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:40.697 [2024-12-16 20:12:48.164792] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.697 [2024-12-16 20:12:48.164799] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:40.697 [2024-12-16 20:12:48.164808] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:40.697 [2024-12-16 20:12:48.164815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.697 [2024-12-16 20:12:48.164823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.697 [2024-12-16 20:12:48.164831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:19:40.697 [2024-12-16 20:12:48.164840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.697 [2024-12-16 20:12:48.164900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.697 [2024-12-16 20:12:48.164908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.697 [2024-12-16 20:12:48.164916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:40.697 [2024-12-16 20:12:48.164923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.697 [2024-12-16 20:12:48.164993] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.697 [2024-12-16 20:12:48.165002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.697 [2024-12-16 20:12:48.165012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.697 [2024-12-16 20:12:48.165038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.697 [2024-12-16 20:12:48.165061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.697 [2024-12-16 20:12:48.165074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.697 [2024-12-16 20:12:48.165084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:40.697 [2024-12-16 20:12:48.165092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.697 [2024-12-16 20:12:48.165099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.697 [2024-12-16 20:12:48.165106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:40.697 
[2024-12-16 20:12:48.165112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.697 [2024-12-16 20:12:48.165134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:40.697 [2024-12-16 20:12:48.165141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:40.697 [2024-12-16 20:12:48.165154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:40.697 [2024-12-16 20:12:48.165161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.697 [2024-12-16 20:12:48.165174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.697 [2024-12-16 20:12:48.165195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165209] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.697 [2024-12-16 20:12:48.165215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.697 [2024-12-16 20:12:48.165235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.697 [2024-12-16 20:12:48.165256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.697 [2024-12-16 20:12:48.165268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.697 [2024-12-16 20:12:48.165275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:40.697 [2024-12-16 20:12:48.165281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.697 [2024-12-16 20:12:48.165288] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.697 [2024-12-16 20:12:48.165311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.697 [2024-12-16 20:12:48.165320] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.697 [2024-12-16 20:12:48.165338] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:40.697 [2024-12-16 20:12:48.165346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.697 [2024-12-16 20:12:48.165353] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 3.38 MiB 00:19:40.697 [2024-12-16 20:12:48.165361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.697 [2024-12-16 20:12:48.165368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.697 [2024-12-16 20:12:48.165375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.697 [2024-12-16 20:12:48.165383] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.697 [2024-12-16 20:12:48.165393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.697 [2024-12-16 20:12:48.165402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:40.697 [2024-12-16 20:12:48.165410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:40.697 [2024-12-16 20:12:48.165417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:40.697 [2024-12-16 20:12:48.165424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:40.697 [2024-12-16 20:12:48.165432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:40.697 [2024-12-16 20:12:48.165439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:40.697 [2024-12-16 20:12:48.165447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:40.697 [2024-12-16 20:12:48.165453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:40.697 [2024-12-16 20:12:48.165461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:40.697 [2024-12-16 20:12:48.165467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:40.697 [2024-12-16 20:12:48.165474] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:40.697 [2024-12-16 20:12:48.165483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:40.697 [2024-12-16 20:12:48.165491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:40.697 [2024-12-16 20:12:48.165498] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.697 [2024-12-16 20:12:48.165506] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.697 [2024-12-16 20:12:48.165514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.697 [2024-12-16 20:12:48.165522] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.697 [2024-12-16 20:12:48.165529] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.697 [2024-12-16 20:12:48.165536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.697 [2024-12-16 20:12:48.165544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.697 [2024-12-16 20:12:48.165551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.697 [2024-12-16 20:12:48.165558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:19:40.697 [2024-12-16 20:12:48.165568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.697 [2024-12-16 20:12:48.183564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.697 [2024-12-16 20:12:48.183754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.697 [2024-12-16 20:12:48.183774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.956 ms 00:19:40.697 [2024-12-16 20:12:48.183790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.697 [2024-12-16 20:12:48.183908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.697 [2024-12-16 20:12:48.183917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.697 [2024-12-16 20:12:48.183925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:40.698 [2024-12-16 20:12:48.183933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.227516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.227570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.698 [2024-12-16 20:12:48.227583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.529 ms 00:19:40.698 [2024-12-16 20:12:48.227592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.227640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.227650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.698 [2024-12-16 20:12:48.227659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:40.698 [2024-12-16 20:12:48.227667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.228217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.228248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.698 [2024-12-16 20:12:48.228265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:19:40.698 [2024-12-16 20:12:48.228273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.228418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.228478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.698 [2024-12-16 20:12:48.228490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:40.698 [2024-12-16 
20:12:48.228498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.244917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.244959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.698 [2024-12-16 20:12:48.244970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.395 ms 00:19:40.698 [2024-12-16 20:12:48.244979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.259503] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:40.698 [2024-12-16 20:12:48.259699] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:40.698 [2024-12-16 20:12:48.259719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.259727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:40.698 [2024-12-16 20:12:48.259738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.630 ms 00:19:40.698 [2024-12-16 20:12:48.259746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.285668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.285733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:40.698 [2024-12-16 20:12:48.285747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.811 ms 00:19:40.698 [2024-12-16 20:12:48.285755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.298875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.299052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:40.698 [2024-12-16 20:12:48.299073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.063 ms 00:19:40.698 [2024-12-16 20:12:48.299080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.312247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.312327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:40.698 [2024-12-16 20:12:48.312343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.837 ms 00:19:40.698 [2024-12-16 20:12:48.312351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.698 [2024-12-16 20:12:48.312744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.698 [2024-12-16 20:12:48.312759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.698 [2024-12-16 20:12:48.312770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:40.698 [2024-12-16 20:12:48.312778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.377943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.378174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.960 [2024-12-16 20:12:48.378200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.146 ms 00:19:40.960 [2024-12-16 20:12:48.378209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.389856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:40.960 [2024-12-16 20:12:48.392908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.393073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.960 [2024-12-16 20:12:48.393101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.653 ms 00:19:40.960 [2024-12-16 20:12:48.393109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.393181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.393192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.960 [2024-12-16 20:12:48.393201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.960 [2024-12-16 20:12:48.393210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.393278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.393288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.960 [2024-12-16 20:12:48.393320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:40.960 [2024-12-16 20:12:48.393332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.394702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.394741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:40.960 [2024-12-16 20:12:48.394752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:19:40.960 [2024-12-16 20:12:48.394760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.394793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.394807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.960 [2024-12-16 20:12:48.394816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.960 [2024-12-16 20:12:48.394824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.394860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.960 [2024-12-16 20:12:48.394872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.394880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.960 [2024-12-16 20:12:48.394888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:40.960 [2024-12-16 20:12:48.394896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.420856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.960 [2024-12-16 20:12:48.420904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.960 [2024-12-16 20:12:48.420918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.940 ms 00:19:40.960 [2024-12-16 20:12:48.420934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.421014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:40.960 [2024-12-16 20:12:48.421024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.960 [2024-12-16 20:12:48.421033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:40.960 [2024-12-16 20:12:48.421042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.960 [2024-12-16 20:12:48.422244] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.264 ms, result 0 00:19:41.903  [2024-12-16T20:12:50.486Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-16T20:12:51.872Z] Copying: 52/1024 [MB] (34 MBps) [2024-12-16T20:12:52.444Z] Copying: 105/1024 [MB] (52 MBps) [2024-12-16T20:12:53.829Z] Copying: 131/1024 [MB] (26 MBps) [2024-12-16T20:12:54.774Z] Copying: 148/1024 [MB] (16 MBps) [2024-12-16T20:12:55.715Z] Copying: 160/1024 [MB] (12 MBps) [2024-12-16T20:12:56.658Z] Copying: 181/1024 [MB] (20 MBps) [2024-12-16T20:12:57.601Z] Copying: 215/1024 [MB] (33 MBps) [2024-12-16T20:12:58.545Z] Copying: 246/1024 [MB] (31 MBps) [2024-12-16T20:12:59.487Z] Copying: 266/1024 [MB] (19 MBps) [2024-12-16T20:13:00.873Z] Copying: 287/1024 [MB] (20 MBps) [2024-12-16T20:13:01.445Z] Copying: 305/1024 [MB] (18 MBps) [2024-12-16T20:13:02.832Z] Copying: 323/1024 [MB] (17 MBps) [2024-12-16T20:13:03.775Z] Copying: 344/1024 [MB] (21 MBps) [2024-12-16T20:13:04.717Z] Copying: 360/1024 [MB] (15 MBps) [2024-12-16T20:13:05.659Z] Copying: 381/1024 [MB] (21 MBps) [2024-12-16T20:13:06.602Z] Copying: 398/1024 [MB] (16 MBps) [2024-12-16T20:13:07.544Z] Copying: 413/1024 [MB] (15 MBps) [2024-12-16T20:13:08.488Z] Copying: 427/1024 [MB] (13 MBps) [2024-12-16T20:13:09.875Z] Copying: 445/1024 [MB] (18 MBps) [2024-12-16T20:13:10.448Z] Copying: 459/1024 [MB] (13 MBps) [2024-12-16T20:13:11.484Z] Copying: 469/1024 [MB] (10 MBps) [2024-12-16T20:13:12.870Z] Copying: 489/1024 [MB] (19 MBps) [2024-12-16T20:13:13.441Z] Copying: 508/1024 [MB] (19 MBps) [2024-12-16T20:13:14.828Z] Copying: 528/1024 [MB] (19 MBps) [2024-12-16T20:13:15.769Z] Copying: 547/1024 [MB] (18 MBps) [2024-12-16T20:13:16.710Z] Copying: 557/1024 [MB] (10 MBps) [2024-12-16T20:13:17.654Z] Copying: 572/1024 [MB] (14 MBps) [2024-12-16T20:13:18.597Z] Copying: 590/1024 [MB] (17 MBps) [2024-12-16T20:13:19.541Z] Copying: 600/1024 [MB] (10 MBps) [2024-12-16T20:13:20.493Z] Copying: 610/1024 [MB] (10 MBps) [2024-12-16T20:13:21.440Z] Copying: 621/1024 [MB] (10 MBps) [2024-12-16T20:13:22.827Z] Copying: 631/1024 [MB] (10 MBps) [2024-12-16T20:13:23.770Z] Copying: 642/1024 [MB] (10 MBps) [2024-12-16T20:13:24.713Z] Copying: 652/1024 [MB] (10 MBps) [2024-12-16T20:13:25.652Z] Copying: 662/1024 [MB] (10 MBps) [2024-12-16T20:13:26.595Z] Copying: 672/1024 [MB] (10 MBps) [2024-12-16T20:13:27.538Z] Copying: 683/1024 [MB] (10 MBps) [2024-12-16T20:13:28.482Z] Copying: 693/1024 [MB] (10 MBps) [2024-12-16T20:13:29.867Z] Copying: 707/1024 [MB] (14 MBps) [2024-12-16T20:13:30.438Z] Copying: 759/1024 [MB] (52 MBps) [2024-12-16T20:13:31.824Z] Copying: 812/1024 [MB] (52 MBps) [2024-12-16T20:13:32.768Z] Copying: 833/1024 [MB] (20 MBps) [2024-12-16T20:13:33.711Z] Copying: 848/1024 [MB] (14 MBps) [2024-12-16T20:13:34.655Z] Copying: 860/1024 [MB] (12 MBps) [2024-12-16T20:13:35.598Z] Copying: 876/1024 [MB] (16 MBps) [2024-12-16T20:13:36.541Z] Copying: 892/1024 [MB] (15 MBps) [2024-12-16T20:13:37.524Z] Copying: 911/1024 [MB] (19 MBps) [2024-12-16T20:13:38.468Z] Copying: 933/1024 [MB] (21 MBps) [2024-12-16T20:13:39.855Z] Copying: 956/1024 [MB] (23 MBps) 
[2024-12-16T20:13:40.800Z] Copying: 972/1024 [MB] (15 MBps) [2024-12-16T20:13:41.743Z] Copying: 991/1024 [MB] (18 MBps) [2024-12-16T20:13:42.315Z] Copying: 1023/1024 [MB] (31 MBps) [2024-12-16T20:13:42.315Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-16 20:13:42.161803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.161884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:34.675 [2024-12-16 20:13:42.161901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:34.675 [2024-12-16 20:13:42.161909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.675 [2024-12-16 20:13:42.163914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:34.675 [2024-12-16 20:13:42.168412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.168458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:34.675 [2024-12-16 20:13:42.168472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:20:34.675 [2024-12-16 20:13:42.168480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.675 [2024-12-16 20:13:42.182295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.182359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:34.675 [2024-12-16 20:13:42.182371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.900 ms 00:20:34.675 [2024-12-16 20:13:42.182379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.675 [2024-12-16 20:13:42.205165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.205213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:34.675 [2024-12-16 20:13:42.205226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.768 ms 00:20:34.675 [2024-12-16 20:13:42.205236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.675 [2024-12-16 20:13:42.211373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.211417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:34.675 [2024-12-16 20:13:42.211438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:20:34.675 [2024-12-16 20:13:42.211447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.675 [2024-12-16 20:13:42.238197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.238245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:34.675 [2024-12-16 20:13:42.238258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.694 ms 00:20:34.675 [2024-12-16 20:13:42.238266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.675 [2024-12-16 20:13:42.254413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.675 [2024-12-16 20:13:42.254459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:34.675 [2024-12-16 20:13:42.254473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.081 ms 00:20:34.675 [2024-12-16 20:13:42.254481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:34.937 [2024-12-16 20:13:42.409224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.937 [2024-12-16 20:13:42.409445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:34.937 [2024-12-16 20:13:42.409469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 154.689 ms 00:20:34.937 [2024-12-16 20:13:42.409485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.937 [2024-12-16 20:13:42.435239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.937 [2024-12-16 20:13:42.435286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:34.937 [2024-12-16 20:13:42.435314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.732 ms 00:20:34.937 [2024-12-16 20:13:42.435322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.937 [2024-12-16 20:13:42.463454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.937 [2024-12-16 20:13:42.463519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:34.937 [2024-12-16 20:13:42.463545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.084 ms 00:20:34.937 [2024-12-16 20:13:42.463554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.937 [2024-12-16 20:13:42.488401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.937 [2024-12-16 20:13:42.488585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:34.937 [2024-12-16 20:13:42.488607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.794 ms 00:20:34.937 [2024-12-16 20:13:42.488614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.937 [2024-12-16 20:13:42.513837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.937 [2024-12-16 20:13:42.513883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:34.937 [2024-12-16 20:13:42.513896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.099 ms 00:20:34.937 [2024-12-16 20:13:42.513903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.937 [2024-12-16 20:13:42.513946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:34.937 [2024-12-16 20:13:42.513962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97280 / 261120 wr_cnt: 1 state: open 00:20:34.937 [2024-12-16 20:13:42.513973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.513981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.513990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.513998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 
20:13:42.514031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:34.937 [2024-12-16 20:13:42.514108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:20:34.938 [2024-12-16 20:13:42.514226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:34.938 [2024-12-16 20:13:42.514796] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:34.938 [2024-12-16 20:13:42.514804] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a332268c-feec-4876-952a-9f7e0b8fbbf4 00:20:34.938 [2024-12-16 20:13:42.514814] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97280 00:20:34.938 [2024-12-16 20:13:42.514822] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 98240 00:20:34.938 [2024-12-16 20:13:42.514832] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97280 00:20:34.938 [2024-12-16 20:13:42.514841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:20:34.938 [2024-12-16 20:13:42.514848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:34.938 [2024-12-16 20:13:42.514863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:34.938 [2024-12-16 20:13:42.514871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:34.938 
[2024-12-16 20:13:42.514885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:34.938 [2024-12-16 20:13:42.514892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:34.938 [2024-12-16 20:13:42.514900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.938 [2024-12-16 20:13:42.514908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:34.938 [2024-12-16 20:13:42.514917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:20:34.939 [2024-12-16 20:13:42.514924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.939 [2024-12-16 20:13:42.528464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.939 [2024-12-16 20:13:42.528508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:34.939 [2024-12-16 20:13:42.528520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.507 ms 00:20:34.939 [2024-12-16 20:13:42.528528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.939 [2024-12-16 20:13:42.528757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.939 [2024-12-16 20:13:42.528766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:34.939 [2024-12-16 20:13:42.528775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:34.939 [2024-12-16 20:13:42.528782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.939 [2024-12-16 20:13:42.567546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.939 [2024-12-16 20:13:42.567745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.939 [2024-12-16 20:13:42.567767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.939 [2024-12-16 20:13:42.567776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.939 [2024-12-16 20:13:42.567839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.939 [2024-12-16 20:13:42.567848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:34.939 [2024-12-16 20:13:42.567856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.939 [2024-12-16 20:13:42.567865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.939 [2024-12-16 20:13:42.567946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.939 [2024-12-16 20:13:42.567957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.939 [2024-12-16 20:13:42.567966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.939 [2024-12-16 20:13:42.567974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.939 [2024-12-16 20:13:42.568014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.939 [2024-12-16 20:13:42.568023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.939 [2024-12-16 20:13:42.568032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.939 [2024-12-16 20:13:42.568039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.649391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.649442] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.200 [2024-12-16 20:13:42.649455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.649464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.682261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.682487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.200 [2024-12-16 20:13:42.682511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.682520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.682590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.682607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.200 [2024-12-16 20:13:42.682616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.682624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.682665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.682675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.200 [2024-12-16 20:13:42.682684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.682693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.682800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.682817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.200 [2024-12-16 20:13:42.682829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.682837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.682870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.682879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:35.200 [2024-12-16 20:13:42.682888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.682896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.682938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.682947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.200 [2024-12-16 20:13:42.682958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.682966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.683014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.200 [2024-12-16 20:13:42.683023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.200 [2024-12-16 20:13:42.683031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.200 [2024-12-16 20:13:42.683039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.200 [2024-12-16 20:13:42.683172] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 522.059 ms, result 0 00:20:37.115 00:20:37.115 00:20:37.115 20:13:44 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:37.115 [2024-12-16 20:13:44.337335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:37.115 [2024-12-16 20:13:44.337469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74822 ] 00:20:37.115 [2024-12-16 20:13:44.491672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.115 [2024-12-16 20:13:44.681109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.376 [2024-12-16 20:13:44.966386] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:37.377 [2024-12-16 20:13:44.966748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:37.639 [2024-12-16 20:13:45.122039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.639 [2024-12-16 20:13:45.122099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:37.639 [2024-12-16 20:13:45.122115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:37.639 [2024-12-16 20:13:45.122126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.122180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.122191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.640 [2024-12-16 20:13:45.122200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:37.640 [2024-12-16 20:13:45.122208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.122228] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:37.640 [2024-12-16 20:13:45.123015] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:37.640 [2024-12-16 20:13:45.123040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.123048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.640 [2024-12-16 20:13:45.123057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:20:37.640 [2024-12-16 20:13:45.123065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.124809] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:37.640 [2024-12-16 20:13:45.139890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.139943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:37.640 [2024-12-16 20:13:45.139958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:20:37.640 [2024-12-16 20:13:45.139967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.140063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 
[2024-12-16 20:13:45.140080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:37.640 [2024-12-16 20:13:45.140089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:37.640 [2024-12-16 20:13:45.140102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.148265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.148326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.640 [2024-12-16 20:13:45.148337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.054 ms 00:20:37.640 [2024-12-16 20:13:45.148346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.148444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.148454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.640 [2024-12-16 20:13:45.148463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:37.640 [2024-12-16 20:13:45.148471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.148517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.148526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:37.640 [2024-12-16 20:13:45.148534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:37.640 [2024-12-16 20:13:45.148542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.148573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.640 [2024-12-16 20:13:45.152782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.152819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.640 [2024-12-16 20:13:45.152831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.222 ms 00:20:37.640 [2024-12-16 20:13:45.152839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.152878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.152886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:37.640 [2024-12-16 20:13:45.152895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:37.640 [2024-12-16 20:13:45.152905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.152956] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:37.640 [2024-12-16 20:13:45.152979] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:37.640 [2024-12-16 20:13:45.153015] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:37.640 [2024-12-16 20:13:45.153031] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:37.640 [2024-12-16 20:13:45.153107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:37.640 [2024-12-16 20:13:45.153118] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:37.640 [2024-12-16 20:13:45.153130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:37.640 [2024-12-16 20:13:45.153141] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153150] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153157] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:37.640 [2024-12-16 20:13:45.153165] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:37.640 [2024-12-16 20:13:45.153172] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:37.640 [2024-12-16 20:13:45.153180] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:37.640 [2024-12-16 20:13:45.153188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.153195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:37.640 [2024-12-16 20:13:45.153204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:20:37.640 [2024-12-16 20:13:45.153210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.153274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.640 [2024-12-16 20:13:45.153282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:37.640 [2024-12-16 20:13:45.153289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:37.640 [2024-12-16 20:13:45.153310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.640 [2024-12-16 20:13:45.153385] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:37.640 [2024-12-16 20:13:45.153396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:37.640 [2024-12-16 20:13:45.153405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:37.640 [2024-12-16 20:13:45.153428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:37.640 [2024-12-16 20:13:45.153449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.640 [2024-12-16 20:13:45.153462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:37.640 [2024-12-16 20:13:45.153472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:37.640 [2024-12-16 20:13:45.153479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.640 [2024-12-16 20:13:45.153487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:37.640 [2024-12-16 20:13:45.153493] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:20:37.640 [2024-12-16 20:13:45.153500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:37.640 [2024-12-16 20:13:45.153521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:37.640 [2024-12-16 20:13:45.153528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:37.640 [2024-12-16 20:13:45.153541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:37.640 [2024-12-16 20:13:45.153547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153554] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:37.640 [2024-12-16 20:13:45.153561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:37.640 [2024-12-16 20:13:45.153581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:37.640 [2024-12-16 20:13:45.153600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:37.640 [2024-12-16 20:13:45.153619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:37.640 [2024-12-16 20:13:45.153638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.640 [2024-12-16 20:13:45.153650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:37.640 [2024-12-16 20:13:45.153656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:37.640 [2024-12-16 20:13:45.153662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.640 [2024-12-16 20:13:45.153668] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:37.640 [2024-12-16 20:13:45.153679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:37.640 [2024-12-16 20:13:45.153686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.640 [2024-12-16 20:13:45.153696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.640 [2024-12-16 20:13:45.153704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:37.641 [2024-12-16 20:13:45.153711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:37.641 [2024-12-16 20:13:45.153718] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:37.641 [2024-12-16 20:13:45.153725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:37.641 [2024-12-16 20:13:45.153732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:37.641 [2024-12-16 20:13:45.153738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:37.641 [2024-12-16 20:13:45.153747] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:37.641 [2024-12-16 20:13:45.153756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.641 [2024-12-16 20:13:45.153764] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:37.641 [2024-12-16 20:13:45.153772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:37.641 [2024-12-16 20:13:45.153778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:37.641 [2024-12-16 20:13:45.153787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:37.641 [2024-12-16 20:13:45.153794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:37.641 [2024-12-16 20:13:45.153802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:37.641 [2024-12-16 20:13:45.153808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:37.641 [2024-12-16 20:13:45.153815] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:37.641 [2024-12-16 20:13:45.153822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:37.641 [2024-12-16 20:13:45.153829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:37.641 [2024-12-16 20:13:45.153836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:37.641 [2024-12-16 20:13:45.153844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:37.641 [2024-12-16 20:13:45.153851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:37.641 [2024-12-16 20:13:45.153858] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:37.641 [2024-12-16 20:13:45.153866] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.641 [2024-12-16 20:13:45.153875] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:37.641 [2024-12-16 20:13:45.153882] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:37.641 [2024-12-16 20:13:45.153889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:37.641 [2024-12-16 20:13:45.153896] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:37.641 [2024-12-16 20:13:45.153905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.153912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:37.641 [2024-12-16 20:13:45.153919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:20:37.641 [2024-12-16 20:13:45.153926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.171930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.171979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.641 [2024-12-16 20:13:45.172002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.964 ms 00:20:37.641 [2024-12-16 20:13:45.172017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.172108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.172117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:37.641 [2024-12-16 20:13:45.172127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:37.641 [2024-12-16 20:13:45.172136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.215201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.215427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.641 [2024-12-16 20:13:45.215450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.012 ms 00:20:37.641 [2024-12-16 20:13:45.215459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.215511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.215522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.641 [2024-12-16 20:13:45.215531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.641 [2024-12-16 20:13:45.215538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.216124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.216156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.641 [2024-12-16 20:13:45.216167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:20:37.641 [2024-12-16 20:13:45.216181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.216326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.216336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.641 [2024-12-16 20:13:45.216345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 
00:20:37.641 [2024-12-16 20:13:45.216352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.233343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.233376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.641 [2024-12-16 20:13:45.233387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.965 ms 00:20:37.641 [2024-12-16 20:13:45.233396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.247967] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:37.641 [2024-12-16 20:13:45.248041] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:37.641 [2024-12-16 20:13:45.248054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.248063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:37.641 [2024-12-16 20:13:45.248074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:20:37.641 [2024-12-16 20:13:45.248082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.641 [2024-12-16 20:13:45.274118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.641 [2024-12-16 20:13:45.274169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:37.641 [2024-12-16 20:13:45.274181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.982 ms 00:20:37.641 [2024-12-16 20:13:45.274189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.287518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.287565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:37.903 [2024-12-16 20:13:45.287577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.269 ms 00:20:37.903 [2024-12-16 20:13:45.287585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.300611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.300660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:37.903 [2024-12-16 20:13:45.300682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.976 ms 00:20:37.903 [2024-12-16 20:13:45.300690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.301085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.301098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:37.903 [2024-12-16 20:13:45.301107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:37.903 [2024-12-16 20:13:45.301115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.369295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.369361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:37.903 [2024-12-16 20:13:45.369376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.161 ms 00:20:37.903 [2024-12-16 20:13:45.369385] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.381539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:37.903 [2024-12-16 20:13:45.384854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.385033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:37.903 [2024-12-16 20:13:45.385056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.407 ms 00:20:37.903 [2024-12-16 20:13:45.385072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.385155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.385167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:37.903 [2024-12-16 20:13:45.385177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:37.903 [2024-12-16 20:13:45.385185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.386585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.386633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:37.903 [2024-12-16 20:13:45.386644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:20:37.903 [2024-12-16 20:13:45.386652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.387956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.388011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:37.903 [2024-12-16 20:13:45.388022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:20:37.903 [2024-12-16 20:13:45.388030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.388066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.388074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:37.903 [2024-12-16 20:13:45.388089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.903 [2024-12-16 20:13:45.388096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.388133] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:37.903 [2024-12-16 20:13:45.388143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.388154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:37.903 [2024-12-16 20:13:45.388163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:37.903 [2024-12-16 20:13:45.388170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.414516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.414679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:37.903 [2024-12-16 20:13:45.414740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.326 ms 00:20:37.903 [2024-12-16 20:13:45.414763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.415174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:37.903 [2024-12-16 20:13:45.415361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:37.903 [2024-12-16 20:13:45.415382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:37.903 [2024-12-16 20:13:45.415392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.903 [2024-12-16 20:13:45.421869] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.828 ms, result 0 00:20:39.289  [2024-12-16T20:13:47.872Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T20:13:48.817Z] Copying: 24/1024 [MB] (11 MBps) [2024-12-16T20:13:49.761Z] Copying: 44/1024 [MB] (19 MBps) [2024-12-16T20:13:50.706Z] Copying: 69/1024 [MB] (25 MBps) [2024-12-16T20:13:51.650Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-16T20:13:53.038Z] Copying: 117/1024 [MB] (23 MBps) [2024-12-16T20:13:53.610Z] Copying: 142/1024 [MB] (25 MBps) [2024-12-16T20:13:54.998Z] Copying: 162/1024 [MB] (20 MBps) [2024-12-16T20:13:55.941Z] Copying: 185/1024 [MB] (22 MBps) [2024-12-16T20:13:56.886Z] Copying: 219/1024 [MB] (34 MBps) [2024-12-16T20:13:57.830Z] Copying: 246/1024 [MB] (26 MBps) [2024-12-16T20:13:58.774Z] Copying: 261/1024 [MB] (15 MBps) [2024-12-16T20:13:59.719Z] Copying: 282/1024 [MB] (21 MBps) [2024-12-16T20:14:00.663Z] Copying: 303/1024 [MB] (20 MBps) [2024-12-16T20:14:02.048Z] Copying: 324/1024 [MB] (20 MBps) [2024-12-16T20:14:02.621Z] Copying: 345/1024 [MB] (21 MBps) [2024-12-16T20:14:03.611Z] Copying: 359/1024 [MB] (14 MBps) [2024-12-16T20:14:04.995Z] Copying: 374/1024 [MB] (14 MBps) [2024-12-16T20:14:05.935Z] Copying: 393/1024 [MB] (19 MBps) [2024-12-16T20:14:06.873Z] Copying: 405/1024 [MB] (11 MBps) [2024-12-16T20:14:07.814Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-16T20:14:08.754Z] Copying: 434/1024 [MB] (17 MBps) [2024-12-16T20:14:09.693Z] Copying: 450/1024 [MB] (16 MBps) [2024-12-16T20:14:10.633Z] Copying: 469/1024 [MB] (18 MBps) [2024-12-16T20:14:12.016Z] Copying: 486/1024 [MB] (17 MBps) [2024-12-16T20:14:12.957Z] Copying: 497/1024 [MB] (10 MBps) [2024-12-16T20:14:13.898Z] Copying: 507/1024 [MB] (10 MBps) [2024-12-16T20:14:14.838Z] Copying: 518/1024 [MB] (10 MBps) [2024-12-16T20:14:15.779Z] Copying: 529/1024 [MB] (10 MBps) [2024-12-16T20:14:16.720Z] Copying: 539/1024 [MB] (10 MBps) [2024-12-16T20:14:17.661Z] Copying: 550/1024 [MB] (10 MBps) [2024-12-16T20:14:19.044Z] Copying: 561/1024 [MB] (10 MBps) [2024-12-16T20:14:19.616Z] Copying: 576/1024 [MB] (15 MBps) [2024-12-16T20:14:21.002Z] Copying: 595/1024 [MB] (18 MBps) [2024-12-16T20:14:21.944Z] Copying: 614/1024 [MB] (19 MBps) [2024-12-16T20:14:22.887Z] Copying: 629/1024 [MB] (15 MBps) [2024-12-16T20:14:23.831Z] Copying: 644/1024 [MB] (15 MBps) [2024-12-16T20:14:24.773Z] Copying: 664/1024 [MB] (19 MBps) [2024-12-16T20:14:25.714Z] Copying: 681/1024 [MB] (16 MBps) [2024-12-16T20:14:26.656Z] Copying: 700/1024 [MB] (19 MBps) [2024-12-16T20:14:28.043Z] Copying: 716/1024 [MB] (15 MBps) [2024-12-16T20:14:28.616Z] Copying: 733/1024 [MB] (17 MBps) [2024-12-16T20:14:29.640Z] Copying: 756/1024 [MB] (22 MBps) [2024-12-16T20:14:31.028Z] Copying: 768/1024 [MB] (12 MBps) [2024-12-16T20:14:31.971Z] Copying: 783/1024 [MB] (14 MBps) [2024-12-16T20:14:32.915Z] Copying: 796/1024 [MB] (13 MBps) [2024-12-16T20:14:33.860Z] Copying: 816/1024 [MB] (19 MBps) [2024-12-16T20:14:34.803Z] Copying: 834/1024 [MB] (17 MBps) [2024-12-16T20:14:35.745Z] Copying: 845/1024 [MB] (10 MBps) [2024-12-16T20:14:36.679Z] Copying: 855/1024 [MB] (10 
MBps) [2024-12-16T20:14:37.618Z] Copying: 867/1024 [MB] (12 MBps) [2024-12-16T20:14:39.005Z] Copying: 879/1024 [MB] (12 MBps) [2024-12-16T20:14:39.947Z] Copying: 890/1024 [MB] (11 MBps) [2024-12-16T20:14:40.891Z] Copying: 904/1024 [MB] (13 MBps) [2024-12-16T20:14:41.830Z] Copying: 918/1024 [MB] (14 MBps) [2024-12-16T20:14:42.773Z] Copying: 930/1024 [MB] (11 MBps) [2024-12-16T20:14:43.713Z] Copying: 955/1024 [MB] (25 MBps) [2024-12-16T20:14:44.654Z] Copying: 976/1024 [MB] (20 MBps) [2024-12-16T20:14:46.033Z] Copying: 994/1024 [MB] (18 MBps) [2024-12-16T20:14:46.033Z] Copying: 1013/1024 [MB] (18 MBps) [2024-12-16T20:14:46.294Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 20:14:46.292136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.654 [2024-12-16 20:14:46.292262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:38.654 [2024-12-16 20:14:46.292346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:38.654 [2024-12-16 20:14:46.292358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.654 [2024-12-16 20:14:46.292393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:38.914 [2024-12-16 20:14:46.296696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-16 20:14:46.296768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:38.914 [2024-12-16 20:14:46.296783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:21:38.914 [2024-12-16 20:14:46.296795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-16 20:14:46.297148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-16 20:14:46.297164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:38.914 [2024-12-16 20:14:46.297182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:38.914 [2024-12-16 20:14:46.297194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-16 20:14:46.305762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-16 20:14:46.305812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:38.914 [2024-12-16 20:14:46.305824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.546 ms 00:21:38.914 [2024-12-16 20:14:46.305833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-16 20:14:46.312548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-16 20:14:46.312595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:38.914 [2024-12-16 20:14:46.312606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:21:38.914 [2024-12-16 20:14:46.312628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-16 20:14:46.339727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-16 20:14:46.339925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:38.914 [2024-12-16 20:14:46.339950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.042 ms 00:21:38.914 [2024-12-16 20:14:46.339958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.914 [2024-12-16 20:14:46.356638] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.914 [2024-12-16 20:14:46.356687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:38.914 [2024-12-16 20:14:46.356699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.596 ms 00:21:38.914 [2024-12-16 20:14:46.356708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.175 [2024-12-16 20:14:46.725408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.175 [2024-12-16 20:14:46.725458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:39.175 [2024-12-16 20:14:46.725471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 368.647 ms 00:21:39.175 [2024-12-16 20:14:46.725479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.175 [2024-12-16 20:14:46.751428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.175 [2024-12-16 20:14:46.751476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:39.175 [2024-12-16 20:14:46.751488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.926 ms 00:21:39.175 [2024-12-16 20:14:46.751496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.175 [2024-12-16 20:14:46.775994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.175 [2024-12-16 20:14:46.776027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:39.175 [2024-12-16 20:14:46.776038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.455 ms 00:21:39.175 [2024-12-16 20:14:46.776057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.175 [2024-12-16 20:14:46.799020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.175 [2024-12-16 20:14:46.799157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:39.175 [2024-12-16 20:14:46.799174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.930 ms 00:21:39.175 [2024-12-16 20:14:46.799181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.437 [2024-12-16 20:14:46.822213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.437 [2024-12-16 20:14:46.822351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:39.437 [2024-12-16 20:14:46.822367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.969 ms 00:21:39.437 [2024-12-16 20:14:46.822374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.437 [2024-12-16 20:14:46.822654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:39.437 [2024-12-16 20:14:46.822691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:21:39.437 [2024-12-16 20:14:46.822703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:39.437 [2024-12-16 20:14:46.822712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822741] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 
20:14:46.822927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.822993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:21:39.438 [2024-12-16 20:14:46.823109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:39.438 [2024-12-16 20:14:46.823395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:39.439 [2024-12-16 20:14:46.823467] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:39.439 [2024-12-16 20:14:46.823477] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a332268c-feec-4876-952a-9f7e0b8fbbf4 00:21:39.439 [2024-12-16 20:14:46.823485] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:21:39.439 [2024-12-16 20:14:46.823492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 37312 00:21:39.439 [2024-12-16 20:14:46.823499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 36352 00:21:39.439 [2024-12-16 20:14:46.823512] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0264 00:21:39.439 [2024-12-16 20:14:46.823519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:39.439 [2024-12-16 20:14:46.823527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:39.439 [2024-12-16 20:14:46.823534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:39.439 [2024-12-16 20:14:46.823540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:39.439 [2024-12-16 20:14:46.823553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:39.439 [2024-12-16 20:14:46.823561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.439 [2024-12-16 20:14:46.823569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:39.439 [2024-12-16 20:14:46.823578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:21:39.439 [2024-12-16 20:14:46.823585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.836089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.439 [2024-12-16 20:14:46.836135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:39.439 [2024-12-16 20:14:46.836145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.469 ms 00:21:39.439 [2024-12-16 20:14:46.836153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.836378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.439 [2024-12-16 20:14:46.836388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:39.439 [2024-12-16 20:14:46.836397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:21:39.439 [2024-12-16 20:14:46.836405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.873397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.873559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.439 [2024-12-16 20:14:46.873581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.873589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.873644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.873652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.439 [2024-12-16 20:14:46.873660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.873674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.873736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.873749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.439 [2024-12-16 20:14:46.873757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.873765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.873780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.873787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:21:39.439 [2024-12-16 20:14:46.873794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.873801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.953677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.953731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:39.439 [2024-12-16 20:14:46.953743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.953751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.986526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.986570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:39.439 [2024-12-16 20:14:46.986582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.986591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.986654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.986664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.439 [2024-12-16 20:14:46.986679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.986687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.986728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.986737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.439 [2024-12-16 20:14:46.986746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.986754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.986851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.986861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.439 [2024-12-16 20:14:46.986870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.986881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.986915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.986924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:39.439 [2024-12-16 20:14:46.986933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.986941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.986983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.986991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.439 [2024-12-16 20:14:46.987000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.987011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.987060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.439 [2024-12-16 20:14:46.987069] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.439 [2024-12-16 20:14:46.987077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.439 [2024-12-16 20:14:46.987085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.439 [2024-12-16 20:14:46.987219] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 695.086 ms, result 0 00:21:40.381 00:21:40.381 00:21:40.381 20:14:47 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:42.928 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:42.928 20:14:50 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:42.928 20:14:50 -- ftl/restore.sh@85 -- # restore_kill 00:21:42.928 20:14:50 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:42.928 20:14:50 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:42.928 20:14:50 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:42.928 Process with pid 72767 is not found 00:21:42.928 Remove shared memory files 00:21:42.929 20:14:50 -- ftl/restore.sh@32 -- # killprocess 72767 00:21:42.929 20:14:50 -- common/autotest_common.sh@936 -- # '[' -z 72767 ']' 00:21:42.929 20:14:50 -- common/autotest_common.sh@940 -- # kill -0 72767 00:21:42.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72767) - No such process 00:21:42.929 20:14:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72767 is not found' 00:21:42.929 20:14:50 -- ftl/restore.sh@33 -- # remove_shm 00:21:42.929 20:14:50 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:42.929 20:14:50 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:42.929 20:14:50 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:42.929 20:14:50 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:42.929 20:14:50 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:42.929 20:14:50 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:42.929 ************************************ 00:21:42.929 END TEST ftl_restore 00:21:42.929 ************************************ 00:21:42.929 00:21:42.929 real 4m18.484s 00:21:42.929 user 4m5.776s 00:21:42.929 sys 0m12.560s 00:21:42.929 20:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:42.929 20:14:50 -- common/autotest_common.sh@10 -- # set +x 00:21:42.929 20:14:50 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:42.929 20:14:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:42.929 20:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:42.929 20:14:50 -- common/autotest_common.sh@10 -- # set +x 00:21:42.929 ************************************ 00:21:42.929 START TEST ftl_dirty_shutdown 00:21:42.929 ************************************ 00:21:42.929 20:14:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:42.929 * Looking for test storage... 
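For reference, the bdev stack that dirty_shutdown.sh assembles in the trace below can be reproduced by hand with the same rpc.py calls the script issues. This is only a condensed sketch of this run's sequence, not part of the test output: the rpc= shorthand is mine, and the two UUIDs are the lvstore and lvol UUIDs that the preceding create calls return later in this log.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base device (0000:00:07.0) that will hold the FTL data region
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
  # the script first removes any pre-existing lvstores, then creates a fresh one
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  # thin-provisioned 103424 MiB lvol used as the FTL base bdev
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 1e3d1e40-9d28-4548-a7c7-f066ad5b8d26
  # second controller (0000:00:06.0) supplies the NV cache; split off a 5171 MiB piece
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
  $rpc bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top of the lvol, with the split bdev as the write-buffer cache
  $rpc -t 240 bdev_ftl_create -b ftl0 -d e17728e9-fd1f-40f3-99d8-71f68f732f04 --l2p_dram_limit 10 -c nvc0n1p0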
00:21:42.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:42.929 20:14:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:42.929 20:14:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:42.929 20:14:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:42.929 20:14:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:42.929 20:14:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:42.929 20:14:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:42.929 20:14:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:42.929 20:14:50 -- scripts/common.sh@335 -- # IFS=.-: 00:21:42.929 20:14:50 -- scripts/common.sh@335 -- # read -ra ver1 00:21:42.929 20:14:50 -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.929 20:14:50 -- scripts/common.sh@336 -- # read -ra ver2 00:21:42.929 20:14:50 -- scripts/common.sh@337 -- # local 'op=<' 00:21:42.929 20:14:50 -- scripts/common.sh@339 -- # ver1_l=2 00:21:42.929 20:14:50 -- scripts/common.sh@340 -- # ver2_l=1 00:21:42.929 20:14:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:42.929 20:14:50 -- scripts/common.sh@343 -- # case "$op" in 00:21:42.929 20:14:50 -- scripts/common.sh@344 -- # : 1 00:21:42.929 20:14:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:42.929 20:14:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:42.929 20:14:50 -- scripts/common.sh@364 -- # decimal 1 00:21:42.929 20:14:50 -- scripts/common.sh@352 -- # local d=1 00:21:42.929 20:14:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.929 20:14:50 -- scripts/common.sh@354 -- # echo 1 00:21:42.929 20:14:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:42.929 20:14:50 -- scripts/common.sh@365 -- # decimal 2 00:21:42.929 20:14:50 -- scripts/common.sh@352 -- # local d=2 00:21:42.929 20:14:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.929 20:14:50 -- scripts/common.sh@354 -- # echo 2 00:21:42.929 20:14:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:42.929 20:14:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:42.929 20:14:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:42.929 20:14:50 -- scripts/common.sh@367 -- # return 0 00:21:42.929 20:14:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.929 20:14:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:42.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.929 --rc genhtml_branch_coverage=1 00:21:42.929 --rc genhtml_function_coverage=1 00:21:42.929 --rc genhtml_legend=1 00:21:42.929 --rc geninfo_all_blocks=1 00:21:42.929 --rc geninfo_unexecuted_blocks=1 00:21:42.929 00:21:42.929 ' 00:21:42.929 20:14:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:42.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.929 --rc genhtml_branch_coverage=1 00:21:42.929 --rc genhtml_function_coverage=1 00:21:42.929 --rc genhtml_legend=1 00:21:42.929 --rc geninfo_all_blocks=1 00:21:42.929 --rc geninfo_unexecuted_blocks=1 00:21:42.929 00:21:42.929 ' 00:21:42.929 20:14:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:42.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.929 --rc genhtml_branch_coverage=1 00:21:42.929 --rc genhtml_function_coverage=1 00:21:42.929 --rc genhtml_legend=1 00:21:42.929 --rc geninfo_all_blocks=1 00:21:42.929 --rc geninfo_unexecuted_blocks=1 00:21:42.929 00:21:42.929 ' 00:21:42.929 20:14:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:42.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.929 --rc genhtml_branch_coverage=1 00:21:42.929 --rc genhtml_function_coverage=1 00:21:42.929 --rc genhtml_legend=1 00:21:42.929 --rc geninfo_all_blocks=1 00:21:42.929 --rc geninfo_unexecuted_blocks=1 00:21:42.929 00:21:42.929 ' 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:42.929 20:14:50 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:42.929 20:14:50 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:42.929 20:14:50 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:42.929 20:14:50 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:42.929 20:14:50 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:42.929 20:14:50 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.929 20:14:50 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:42.929 20:14:50 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:42.929 20:14:50 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.929 20:14:50 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.929 20:14:50 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:42.929 20:14:50 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:42.929 20:14:50 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:42.929 20:14:50 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:42.929 20:14:50 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:42.929 20:14:50 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:42.929 20:14:50 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.929 20:14:50 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.929 20:14:50 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:42.929 20:14:50 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:42.929 20:14:50 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:42.929 20:14:50 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:42.929 20:14:50 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:42.929 20:14:50 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:42.929 20:14:50 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:42.929 20:14:50 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:42.929 20:14:50 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:42.929 20:14:50 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@14 -- # 
getopts :u:c: opt 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75568 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75568 00:21:42.929 20:14:50 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:42.929 20:14:50 -- common/autotest_common.sh@829 -- # '[' -z 75568 ']' 00:21:42.929 20:14:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.929 20:14:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.929 20:14:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.929 20:14:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.929 20:14:50 -- common/autotest_common.sh@10 -- # set +x 00:21:43.191 [2024-12-16 20:14:50.568825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:43.191 [2024-12-16 20:14:50.568986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75568 ] 00:21:43.191 [2024-12-16 20:14:50.726211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.453 [2024-12-16 20:14:50.948335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.453 [2024-12-16 20:14:50.948588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.840 20:14:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.840 20:14:52 -- common/autotest_common.sh@862 -- # return 0 00:21:44.840 20:14:52 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:21:44.840 20:14:52 -- ftl/common.sh@54 -- # local name=nvme0 00:21:44.840 20:14:52 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:44.840 20:14:52 -- ftl/common.sh@56 -- # local size=103424 00:21:44.840 20:14:52 -- ftl/common.sh@59 -- # local base_bdev 00:21:44.840 20:14:52 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:21:44.840 20:14:52 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:44.840 20:14:52 -- ftl/common.sh@62 -- # local base_size 00:21:44.840 20:14:52 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:44.840 20:14:52 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:21:44.840 20:14:52 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:44.840 20:14:52 -- common/autotest_common.sh@1369 -- # local bs 00:21:44.840 20:14:52 -- common/autotest_common.sh@1370 -- # local nb 00:21:44.840 20:14:52 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:45.101 20:14:52 -- common/autotest_common.sh@1371 -- # 
bdev_info='[ 00:21:45.101 { 00:21:45.101 "name": "nvme0n1", 00:21:45.101 "aliases": [ 00:21:45.101 "c4698c94-c8e9-42b8-8b74-4b11038c444e" 00:21:45.101 ], 00:21:45.101 "product_name": "NVMe disk", 00:21:45.101 "block_size": 4096, 00:21:45.101 "num_blocks": 1310720, 00:21:45.101 "uuid": "c4698c94-c8e9-42b8-8b74-4b11038c444e", 00:21:45.101 "assigned_rate_limits": { 00:21:45.101 "rw_ios_per_sec": 0, 00:21:45.101 "rw_mbytes_per_sec": 0, 00:21:45.101 "r_mbytes_per_sec": 0, 00:21:45.101 "w_mbytes_per_sec": 0 00:21:45.101 }, 00:21:45.101 "claimed": true, 00:21:45.102 "claim_type": "read_many_write_one", 00:21:45.102 "zoned": false, 00:21:45.102 "supported_io_types": { 00:21:45.102 "read": true, 00:21:45.102 "write": true, 00:21:45.102 "unmap": true, 00:21:45.102 "write_zeroes": true, 00:21:45.102 "flush": true, 00:21:45.102 "reset": true, 00:21:45.102 "compare": true, 00:21:45.102 "compare_and_write": false, 00:21:45.102 "abort": true, 00:21:45.102 "nvme_admin": true, 00:21:45.102 "nvme_io": true 00:21:45.102 }, 00:21:45.102 "driver_specific": { 00:21:45.102 "nvme": [ 00:21:45.102 { 00:21:45.102 "pci_address": "0000:00:07.0", 00:21:45.102 "trid": { 00:21:45.102 "trtype": "PCIe", 00:21:45.102 "traddr": "0000:00:07.0" 00:21:45.102 }, 00:21:45.102 "ctrlr_data": { 00:21:45.102 "cntlid": 0, 00:21:45.102 "vendor_id": "0x1b36", 00:21:45.102 "model_number": "QEMU NVMe Ctrl", 00:21:45.102 "serial_number": "12341", 00:21:45.102 "firmware_revision": "8.0.0", 00:21:45.102 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:45.102 "oacs": { 00:21:45.102 "security": 0, 00:21:45.102 "format": 1, 00:21:45.102 "firmware": 0, 00:21:45.102 "ns_manage": 1 00:21:45.102 }, 00:21:45.102 "multi_ctrlr": false, 00:21:45.102 "ana_reporting": false 00:21:45.102 }, 00:21:45.102 "vs": { 00:21:45.102 "nvme_version": "1.4" 00:21:45.102 }, 00:21:45.102 "ns_data": { 00:21:45.102 "id": 1, 00:21:45.102 "can_share": false 00:21:45.102 } 00:21:45.102 } 00:21:45.102 ], 00:21:45.102 "mp_policy": "active_passive" 00:21:45.102 } 00:21:45.102 } 00:21:45.102 ]' 00:21:45.102 20:14:52 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:45.102 20:14:52 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:45.102 20:14:52 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:45.102 20:14:52 -- common/autotest_common.sh@1373 -- # nb=1310720 00:21:45.102 20:14:52 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:21:45.102 20:14:52 -- common/autotest_common.sh@1377 -- # echo 5120 00:21:45.102 20:14:52 -- ftl/common.sh@63 -- # base_size=5120 00:21:45.102 20:14:52 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:45.102 20:14:52 -- ftl/common.sh@67 -- # clear_lvols 00:21:45.102 20:14:52 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:45.102 20:14:52 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:45.363 20:14:52 -- ftl/common.sh@28 -- # stores=5bb754d4-b3c8-42c4-ad97-fa9d29a7745a 00:21:45.363 20:14:52 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:45.363 20:14:52 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bb754d4-b3c8-42c4-ad97-fa9d29a7745a 00:21:45.623 20:14:53 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:45.885 20:14:53 -- ftl/common.sh@68 -- # lvs=1e3d1e40-9d28-4548-a7c7-f066ad5b8d26 00:21:45.885 20:14:53 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 
1e3d1e40-9d28-4548-a7c7-f066ad5b8d26 00:21:45.885 20:14:53 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:45.885 20:14:53 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:21:45.885 20:14:53 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:45.885 20:14:53 -- ftl/common.sh@35 -- # local name=nvc0 00:21:45.885 20:14:53 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:45.885 20:14:53 -- ftl/common.sh@37 -- # local base_bdev=e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:45.885 20:14:53 -- ftl/common.sh@38 -- # local cache_size= 00:21:45.885 20:14:53 -- ftl/common.sh@41 -- # get_bdev_size e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:45.885 20:14:53 -- common/autotest_common.sh@1367 -- # local bdev_name=e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:45.885 20:14:53 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:45.885 20:14:53 -- common/autotest_common.sh@1369 -- # local bs 00:21:45.885 20:14:53 -- common/autotest_common.sh@1370 -- # local nb 00:21:45.885 20:14:53 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:46.146 20:14:53 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:46.146 { 00:21:46.146 "name": "e17728e9-fd1f-40f3-99d8-71f68f732f04", 00:21:46.146 "aliases": [ 00:21:46.146 "lvs/nvme0n1p0" 00:21:46.146 ], 00:21:46.147 "product_name": "Logical Volume", 00:21:46.147 "block_size": 4096, 00:21:46.147 "num_blocks": 26476544, 00:21:46.147 "uuid": "e17728e9-fd1f-40f3-99d8-71f68f732f04", 00:21:46.147 "assigned_rate_limits": { 00:21:46.147 "rw_ios_per_sec": 0, 00:21:46.147 "rw_mbytes_per_sec": 0, 00:21:46.147 "r_mbytes_per_sec": 0, 00:21:46.147 "w_mbytes_per_sec": 0 00:21:46.147 }, 00:21:46.147 "claimed": false, 00:21:46.147 "zoned": false, 00:21:46.147 "supported_io_types": { 00:21:46.147 "read": true, 00:21:46.147 "write": true, 00:21:46.147 "unmap": true, 00:21:46.147 "write_zeroes": true, 00:21:46.147 "flush": false, 00:21:46.147 "reset": true, 00:21:46.147 "compare": false, 00:21:46.147 "compare_and_write": false, 00:21:46.147 "abort": false, 00:21:46.147 "nvme_admin": false, 00:21:46.147 "nvme_io": false 00:21:46.147 }, 00:21:46.147 "driver_specific": { 00:21:46.147 "lvol": { 00:21:46.147 "lvol_store_uuid": "1e3d1e40-9d28-4548-a7c7-f066ad5b8d26", 00:21:46.147 "base_bdev": "nvme0n1", 00:21:46.147 "thin_provision": true, 00:21:46.147 "snapshot": false, 00:21:46.147 "clone": false, 00:21:46.147 "esnap_clone": false 00:21:46.147 } 00:21:46.147 } 00:21:46.147 } 00:21:46.147 ]' 00:21:46.147 20:14:53 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:46.147 20:14:53 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:46.147 20:14:53 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:46.408 20:14:53 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:46.408 20:14:53 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:46.408 20:14:53 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:46.408 20:14:53 -- ftl/common.sh@41 -- # local base_size=5171 00:21:46.408 20:14:53 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:46.408 20:14:53 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:21:46.668 20:14:54 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:46.668 20:14:54 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:46.668 20:14:54 -- ftl/common.sh@48 
-- # get_bdev_size e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:46.668 20:14:54 -- common/autotest_common.sh@1367 -- # local bdev_name=e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:46.668 20:14:54 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:46.668 20:14:54 -- common/autotest_common.sh@1369 -- # local bs 00:21:46.668 20:14:54 -- common/autotest_common.sh@1370 -- # local nb 00:21:46.668 20:14:54 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:46.668 20:14:54 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:46.668 { 00:21:46.668 "name": "e17728e9-fd1f-40f3-99d8-71f68f732f04", 00:21:46.668 "aliases": [ 00:21:46.668 "lvs/nvme0n1p0" 00:21:46.668 ], 00:21:46.668 "product_name": "Logical Volume", 00:21:46.668 "block_size": 4096, 00:21:46.668 "num_blocks": 26476544, 00:21:46.668 "uuid": "e17728e9-fd1f-40f3-99d8-71f68f732f04", 00:21:46.668 "assigned_rate_limits": { 00:21:46.668 "rw_ios_per_sec": 0, 00:21:46.668 "rw_mbytes_per_sec": 0, 00:21:46.668 "r_mbytes_per_sec": 0, 00:21:46.668 "w_mbytes_per_sec": 0 00:21:46.668 }, 00:21:46.668 "claimed": false, 00:21:46.668 "zoned": false, 00:21:46.668 "supported_io_types": { 00:21:46.668 "read": true, 00:21:46.668 "write": true, 00:21:46.668 "unmap": true, 00:21:46.668 "write_zeroes": true, 00:21:46.668 "flush": false, 00:21:46.668 "reset": true, 00:21:46.668 "compare": false, 00:21:46.668 "compare_and_write": false, 00:21:46.668 "abort": false, 00:21:46.668 "nvme_admin": false, 00:21:46.668 "nvme_io": false 00:21:46.668 }, 00:21:46.668 "driver_specific": { 00:21:46.668 "lvol": { 00:21:46.668 "lvol_store_uuid": "1e3d1e40-9d28-4548-a7c7-f066ad5b8d26", 00:21:46.668 "base_bdev": "nvme0n1", 00:21:46.668 "thin_provision": true, 00:21:46.668 "snapshot": false, 00:21:46.668 "clone": false, 00:21:46.668 "esnap_clone": false 00:21:46.668 } 00:21:46.668 } 00:21:46.668 } 00:21:46.668 ]' 00:21:46.668 20:14:54 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:46.668 20:14:54 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:46.668 20:14:54 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:46.939 20:14:54 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:46.939 20:14:54 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:46.939 20:14:54 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:46.939 20:14:54 -- ftl/common.sh@48 -- # cache_size=5171 00:21:46.939 20:14:54 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:46.939 20:14:54 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:46.939 20:14:54 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:46.939 20:14:54 -- common/autotest_common.sh@1367 -- # local bdev_name=e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:46.939 20:14:54 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:46.939 20:14:54 -- common/autotest_common.sh@1369 -- # local bs 00:21:46.939 20:14:54 -- common/autotest_common.sh@1370 -- # local nb 00:21:46.939 20:14:54 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e17728e9-fd1f-40f3-99d8-71f68f732f04 00:21:47.246 20:14:54 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:47.246 { 00:21:47.246 "name": "e17728e9-fd1f-40f3-99d8-71f68f732f04", 00:21:47.246 "aliases": [ 00:21:47.246 "lvs/nvme0n1p0" 00:21:47.246 ], 00:21:47.246 "product_name": "Logical Volume", 00:21:47.246 
"block_size": 4096, 00:21:47.246 "num_blocks": 26476544, 00:21:47.246 "uuid": "e17728e9-fd1f-40f3-99d8-71f68f732f04", 00:21:47.246 "assigned_rate_limits": { 00:21:47.246 "rw_ios_per_sec": 0, 00:21:47.246 "rw_mbytes_per_sec": 0, 00:21:47.246 "r_mbytes_per_sec": 0, 00:21:47.246 "w_mbytes_per_sec": 0 00:21:47.246 }, 00:21:47.246 "claimed": false, 00:21:47.246 "zoned": false, 00:21:47.246 "supported_io_types": { 00:21:47.246 "read": true, 00:21:47.246 "write": true, 00:21:47.246 "unmap": true, 00:21:47.246 "write_zeroes": true, 00:21:47.246 "flush": false, 00:21:47.246 "reset": true, 00:21:47.246 "compare": false, 00:21:47.246 "compare_and_write": false, 00:21:47.246 "abort": false, 00:21:47.246 "nvme_admin": false, 00:21:47.246 "nvme_io": false 00:21:47.246 }, 00:21:47.246 "driver_specific": { 00:21:47.246 "lvol": { 00:21:47.246 "lvol_store_uuid": "1e3d1e40-9d28-4548-a7c7-f066ad5b8d26", 00:21:47.246 "base_bdev": "nvme0n1", 00:21:47.246 "thin_provision": true, 00:21:47.246 "snapshot": false, 00:21:47.246 "clone": false, 00:21:47.246 "esnap_clone": false 00:21:47.246 } 00:21:47.246 } 00:21:47.246 } 00:21:47.246 ]' 00:21:47.246 20:14:54 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:47.246 20:14:54 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:47.246 20:14:54 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:47.246 20:14:54 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:47.246 20:14:54 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:47.246 20:14:54 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:47.246 20:14:54 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:47.246 20:14:54 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e17728e9-fd1f-40f3-99d8-71f68f732f04 --l2p_dram_limit 10' 00:21:47.246 20:14:54 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:47.246 20:14:54 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:21:47.246 20:14:54 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:47.246 20:14:54 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e17728e9-fd1f-40f3-99d8-71f68f732f04 --l2p_dram_limit 10 -c nvc0n1p0 00:21:47.520 [2024-12-16 20:14:54.963163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.963203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:47.520 [2024-12-16 20:14:54.963217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:47.520 [2024-12-16 20:14:54.963225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.963272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.963280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.520 [2024-12-16 20:14:54.963288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:47.520 [2024-12-16 20:14:54.963295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.963327] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:47.520 [2024-12-16 20:14:54.963931] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:47.520 [2024-12-16 20:14:54.963951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:47.520 [2024-12-16 20:14:54.963958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.520 [2024-12-16 20:14:54.963966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:21:47.520 [2024-12-16 20:14:54.963973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.964030] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 695c319e-f996-4308-bf6d-88949793fc08 00:21:47.520 [2024-12-16 20:14:54.964973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.964992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:47.520 [2024-12-16 20:14:54.965000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:47.520 [2024-12-16 20:14:54.965008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.969649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.969677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.520 [2024-12-16 20:14:54.969686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:21:47.520 [2024-12-16 20:14:54.969693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.969758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.969768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.520 [2024-12-16 20:14:54.969775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:47.520 [2024-12-16 20:14:54.969785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.969826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.969836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:47.520 [2024-12-16 20:14:54.969843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:47.520 [2024-12-16 20:14:54.969851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.969869] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:47.520 [2024-12-16 20:14:54.972815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.972919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.520 [2024-12-16 20:14:54.972935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:21:47.520 [2024-12-16 20:14:54.972941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.972975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.972983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:47.520 [2024-12-16 20:14:54.972991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:47.520 [2024-12-16 20:14:54.972997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.973012] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:47.520 [2024-12-16 20:14:54.973099] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:47.520 [2024-12-16 20:14:54.973111] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:47.520 [2024-12-16 20:14:54.973119] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:47.520 [2024-12-16 20:14:54.973129] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:47.520 [2024-12-16 20:14:54.973136] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:47.520 [2024-12-16 20:14:54.973146] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:47.520 [2024-12-16 20:14:54.973159] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:47.520 [2024-12-16 20:14:54.973166] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:47.520 [2024-12-16 20:14:54.973172] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:47.520 [2024-12-16 20:14:54.973180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.973187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:47.520 [2024-12-16 20:14:54.973195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:21:47.520 [2024-12-16 20:14:54.973201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-12-16 20:14:54.973249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.520 [2024-12-16 20:14:54.973256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:47.520 [2024-12-16 20:14:54.973263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:47.521 [2024-12-16 20:14:54.973270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:54.973342] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:47.521 [2024-12-16 20:14:54.973350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:47.521 [2024-12-16 20:14:54.973358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:47.521 [2024-12-16 20:14:54.973379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:47.521 [2024-12-16 20:14:54.973399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.521 [2024-12-16 20:14:54.973412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:47.521 [2024-12-16 20:14:54.973418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:47.521 [2024-12-16 20:14:54.973427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.521 
[2024-12-16 20:14:54.973433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:47.521 [2024-12-16 20:14:54.973440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:47.521 [2024-12-16 20:14:54.973446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:47.521 [2024-12-16 20:14:54.973460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:47.521 [2024-12-16 20:14:54.973467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:47.521 [2024-12-16 20:14:54.973480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:47.521 [2024-12-16 20:14:54.973485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:47.521 [2024-12-16 20:14:54.973498] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973510] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:47.521 [2024-12-16 20:14:54.973517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:47.521 [2024-12-16 20:14:54.973535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:47.521 [2024-12-16 20:14:54.973556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:47.521 [2024-12-16 20:14:54.973574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.521 [2024-12-16 20:14:54.973586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:47.521 [2024-12-16 20:14:54.973594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:47.521 [2024-12-16 20:14:54.973600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.521 [2024-12-16 20:14:54.973608] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:47.521 [2024-12-16 20:14:54.973614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:47.521 [2024-12-16 20:14:54.973621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.521 [2024-12-16 20:14:54.973636] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:21:47.521 [2024-12-16 20:14:54.973643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:47.521 [2024-12-16 20:14:54.973650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:47.521 [2024-12-16 20:14:54.973656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:47.521 [2024-12-16 20:14:54.973664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:47.521 [2024-12-16 20:14:54.973670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:47.521 [2024-12-16 20:14:54.973678] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:47.521 [2024-12-16 20:14:54.973686] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.521 [2024-12-16 20:14:54.973694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:47.521 [2024-12-16 20:14:54.973701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:47.521 [2024-12-16 20:14:54.973708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:47.521 [2024-12-16 20:14:54.973714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:47.521 [2024-12-16 20:14:54.973721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:47.521 [2024-12-16 20:14:54.973728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:47.521 [2024-12-16 20:14:54.973735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:47.521 [2024-12-16 20:14:54.973742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:47.521 [2024-12-16 20:14:54.973749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:47.521 [2024-12-16 20:14:54.973756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:47.521 [2024-12-16 20:14:54.973763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:47.521 [2024-12-16 20:14:54.973769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:47.521 [2024-12-16 20:14:54.973779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:47.521 [2024-12-16 20:14:54.973785] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:47.521 [2024-12-16 20:14:54.973794] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.521 [2024-12-16 20:14:54.973801] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:47.521 [2024-12-16 20:14:54.973809] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:47.521 [2024-12-16 20:14:54.973815] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:47.521 [2024-12-16 20:14:54.973822] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:47.521 [2024-12-16 20:14:54.973828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:54.973836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:47.521 [2024-12-16 20:14:54.973842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:21:47.521 [2024-12-16 20:14:54.973849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:54.985897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:54.985992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:47.521 [2024-12-16 20:14:54.986033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.006 ms 00:21:47.521 [2024-12-16 20:14:54.986053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:54.986132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:54.986151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:47.521 [2024-12-16 20:14:54.986170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:47.521 [2024-12-16 20:14:54.986186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:55.009954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:55.010048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:47.521 [2024-12-16 20:14:55.010089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.724 ms 00:21:47.521 [2024-12-16 20:14:55.010110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:55.010144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:55.010164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.521 [2024-12-16 20:14:55.010180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:47.521 [2024-12-16 20:14:55.010198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:55.010515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:55.010548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.521 [2024-12-16 20:14:55.010565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:47.521 [2024-12-16 20:14:55.010581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:55.010677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:55.010697] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.521 [2024-12-16 20:14:55.010750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:47.521 [2024-12-16 20:14:55.010771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.521 [2024-12-16 20:14:55.022738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.521 [2024-12-16 20:14:55.022821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.521 [2024-12-16 20:14:55.022859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.943 ms 00:21:47.522 [2024-12-16 20:14:55.022878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.522 [2024-12-16 20:14:55.031897] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:47.522 [2024-12-16 20:14:55.034207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.522 [2024-12-16 20:14:55.034286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:47.522 [2024-12-16 20:14:55.034342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.261 ms 00:21:47.522 [2024-12-16 20:14:55.034360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.522 [2024-12-16 20:14:55.097357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.522 [2024-12-16 20:14:55.097467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:47.522 [2024-12-16 20:14:55.097514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.965 ms 00:21:47.522 [2024-12-16 20:14:55.097534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.522 [2024-12-16 20:14:55.097575] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:21:47.522 [2024-12-16 20:14:55.097604] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:21:51.729 [2024-12-16 20:14:59.314989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.729 [2024-12-16 20:14:59.315317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:51.729 [2024-12-16 20:14:59.315526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4217.391 ms 00:21:51.729 [2024-12-16 20:14:59.315560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.729 [2024-12-16 20:14:59.315795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.729 [2024-12-16 20:14:59.316042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:51.729 [2024-12-16 20:14:59.316079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:21:51.729 [2024-12-16 20:14:59.316100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.729 [2024-12-16 20:14:59.342481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.729 [2024-12-16 20:14:59.342673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:51.729 [2024-12-16 20:14:59.342756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.287 ms 00:21:51.730 [2024-12-16 20:14:59.342783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.730 [2024-12-16 20:14:59.368183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.730 [2024-12-16 20:14:59.368396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:51.730 [2024-12-16 20:14:59.368604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.338 ms 00:21:51.730 [2024-12-16 20:14:59.368631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.730 [2024-12-16 20:14:59.368988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.730 [2024-12-16 20:14:59.369032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:51.730 [2024-12-16 20:14:59.369058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:21:51.730 [2024-12-16 20:14:59.369136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.990 [2024-12-16 20:14:59.440839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.990 [2024-12-16 20:14:59.441011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:51.990 [2024-12-16 20:14:59.441083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.619 ms 00:21:51.990 [2024-12-16 20:14:59.441109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.990 [2024-12-16 20:14:59.468648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.990 [2024-12-16 20:14:59.468815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:51.990 [2024-12-16 20:14:59.468881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.450 ms 00:21:51.991 [2024-12-16 20:14:59.468905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.991 [2024-12-16 20:14:59.470399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.991 [2024-12-16 20:14:59.470555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:21:51.991 [2024-12-16 20:14:59.470620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:21:51.991 [2024-12-16 20:14:59.470644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.991 [2024-12-16 20:14:59.497694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.991 [2024-12-16 20:14:59.497856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:51.991 [2024-12-16 20:14:59.497921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.984 ms 00:21:51.991 [2024-12-16 20:14:59.497945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.991 [2024-12-16 20:14:59.498189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.991 [2024-12-16 20:14:59.498223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:51.991 [2024-12-16 20:14:59.498249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:51.991 [2024-12-16 20:14:59.498271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.991 [2024-12-16 20:14:59.498410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.991 [2024-12-16 20:14:59.498444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:51.991 [2024-12-16 20:14:59.498469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:51.991 [2024-12-16 20:14:59.498490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.991 [2024-12-16 20:14:59.499676] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4535.969 ms, result 0 00:21:51.991 { 00:21:51.991 "name": "ftl0", 00:21:51.991 "uuid": "695c319e-f996-4308-bf6d-88949793fc08" 00:21:51.991 } 00:21:51.991 20:14:59 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:51.991 20:14:59 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:52.251 20:14:59 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:52.251 20:14:59 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:52.251 20:14:59 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:52.512 /dev/nbd0 00:21:52.512 20:14:59 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:52.512 20:14:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:52.512 20:14:59 -- common/autotest_common.sh@867 -- # local i 00:21:52.512 20:14:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:52.512 20:14:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:52.512 20:14:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:52.512 20:14:59 -- common/autotest_common.sh@871 -- # break 00:21:52.512 20:14:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:52.512 20:14:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:52.512 20:14:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:52.512 1+0 records in 00:21:52.512 1+0 records out 00:21:52.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327197 s, 12.5 MB/s 00:21:52.512 20:14:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:52.512 20:14:59 -- common/autotest_common.sh@884 -- # size=4096 00:21:52.512 20:14:59 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:52.512 20:14:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:52.512 20:14:59 -- common/autotest_common.sh@887 -- # return 0 00:21:52.512 20:14:59 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:52.512 [2024-12-16 20:15:00.022941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:52.512 [2024-12-16 20:15:00.023048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75732 ] 00:21:52.774 [2024-12-16 20:15:00.172394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.774 [2024-12-16 20:15:00.340987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.161  [2024-12-16T20:15:02.744Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-16T20:15:03.687Z] Copying: 441/1024 [MB] (245 MBps) [2024-12-16T20:15:04.630Z] Copying: 701/1024 [MB] (259 MBps) [2024-12-16T20:15:04.891Z] Copying: 951/1024 [MB] (250 MBps) [2024-12-16T20:15:05.833Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:21:58.193 00:21:58.193 20:15:05 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:00.101 20:15:07 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:00.101 [2024-12-16 20:15:07.534347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:00.101 [2024-12-16 20:15:07.534635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75814 ] 00:22:00.101 [2024-12-16 20:15:07.693809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.362 [2024-12-16 20:15:07.865570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.749  [2024-12-16T20:15:10.333Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-16T20:15:11.275Z] Copying: 36/1024 [MB] (23 MBps) [2024-12-16T20:15:12.219Z] Copying: 57/1024 [MB] (21 MBps) [2024-12-16T20:15:13.163Z] Copying: 78/1024 [MB] (21 MBps) [2024-12-16T20:15:14.107Z] Copying: 99/1024 [MB] (20 MBps) [2024-12-16T20:15:15.492Z] Copying: 122/1024 [MB] (23 MBps) [2024-12-16T20:15:16.435Z] Copying: 143/1024 [MB] (20 MBps) [2024-12-16T20:15:17.378Z] Copying: 163/1024 [MB] (20 MBps) [2024-12-16T20:15:18.322Z] Copying: 186/1024 [MB] (22 MBps) [2024-12-16T20:15:19.266Z] Copying: 211/1024 [MB] (25 MBps) [2024-12-16T20:15:20.213Z] Copying: 237/1024 [MB] (26 MBps) [2024-12-16T20:15:21.155Z] Copying: 265/1024 [MB] (27 MBps) [2024-12-16T20:15:22.107Z] Copying: 285/1024 [MB] (20 MBps) [2024-12-16T20:15:23.139Z] Copying: 314/1024 [MB] (28 MBps) [2024-12-16T20:15:24.078Z] Copying: 333/1024 [MB] (19 MBps) [2024-12-16T20:15:25.464Z] Copying: 360/1024 [MB] (26 MBps) [2024-12-16T20:15:26.407Z] Copying: 394/1024 [MB] (33 MBps) [2024-12-16T20:15:27.350Z] Copying: 417/1024 [MB] (22 MBps) [2024-12-16T20:15:28.292Z] Copying: 444/1024 [MB] (27 MBps) [2024-12-16T20:15:29.235Z] Copying: 470/1024 [MB] (26 MBps) [2024-12-16T20:15:30.177Z] Copying: 497/1024 [MB] (27 MBps) [2024-12-16T20:15:31.122Z] Copying: 518/1024 [MB] (20 MBps) [2024-12-16T20:15:32.509Z] Copying: 545/1024 [MB] (26 MBps) [2024-12-16T20:15:33.081Z] Copying: 575/1024 [MB] (29 MBps) [2024-12-16T20:15:34.477Z] Copying: 598/1024 [MB] (23 MBps) [2024-12-16T20:15:35.419Z] Copying: 620/1024 [MB] (22 MBps) [2024-12-16T20:15:36.360Z] Copying: 646/1024 [MB] (25 MBps) [2024-12-16T20:15:37.304Z] Copying: 673/1024 [MB] (26 MBps) [2024-12-16T20:15:38.247Z] Copying: 708/1024 [MB] (34 MBps) [2024-12-16T20:15:39.190Z] Copying: 739/1024 [MB] (31 MBps) [2024-12-16T20:15:40.133Z] Copying: 769/1024 [MB] (29 MBps) [2024-12-16T20:15:41.517Z] Copying: 798/1024 [MB] (29 MBps) [2024-12-16T20:15:42.087Z] Copying: 830/1024 [MB] (31 MBps) [2024-12-16T20:15:43.471Z] Copying: 861/1024 [MB] (31 MBps) [2024-12-16T20:15:44.414Z] Copying: 891/1024 [MB] (29 MBps) [2024-12-16T20:15:45.348Z] Copying: 920/1024 [MB] (29 MBps) [2024-12-16T20:15:46.284Z] Copying: 952/1024 [MB] (32 MBps) [2024-12-16T20:15:47.224Z] Copying: 988/1024 [MB] (35 MBps) [2024-12-16T20:15:47.224Z] Copying: 1019/1024 [MB] (31 MBps) [2024-12-16T20:15:48.161Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:22:40.521 00:22:40.521 20:15:47 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:40.521 20:15:47 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:40.521 20:15:48 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:40.802 [2024-12-16 20:15:48.184200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.184266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.802 [2024-12-16 20:15:48.184281] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:40.802 [2024-12-16 20:15:48.184290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.184321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.802 [2024-12-16 20:15:48.186518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.186543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.802 [2024-12-16 20:15:48.186554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:22:40.802 [2024-12-16 20:15:48.186561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.188622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.188648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.802 [2024-12-16 20:15:48.188663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:22:40.802 [2024-12-16 20:15:48.188670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.203382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.203406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:40.802 [2024-12-16 20:15:48.203418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.694 ms 00:22:40.802 [2024-12-16 20:15:48.203425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.208049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.208071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:40.802 [2024-12-16 20:15:48.208082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.591 ms 00:22:40.802 [2024-12-16 20:15:48.208091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.227673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.227699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.802 [2024-12-16 20:15:48.227710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.527 ms 00:22:40.802 [2024-12-16 20:15:48.227717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.241071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.241095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.802 [2024-12-16 20:15:48.241107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.323 ms 00:22:40.802 [2024-12-16 20:15:48.241114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.241230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.241239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.802 [2024-12-16 20:15:48.241248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:40.802 [2024-12-16 20:15:48.241255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.260234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:40.802 [2024-12-16 20:15:48.260258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:40.802 [2024-12-16 20:15:48.260268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.954 ms 00:22:40.802 [2024-12-16 20:15:48.260274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.278397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.278422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:40.802 [2024-12-16 20:15:48.278432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.079 ms 00:22:40.802 [2024-12-16 20:15:48.278439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.296566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.296590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.802 [2024-12-16 20:15:48.296600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.097 ms 00:22:40.802 [2024-12-16 20:15:48.296606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.314312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.802 [2024-12-16 20:15:48.314335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.802 [2024-12-16 20:15:48.314344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.647 ms 00:22:40.802 [2024-12-16 20:15:48.314350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.802 [2024-12-16 20:15:48.314380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.802 [2024-12-16 20:15:48.314393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 
[2024-12-16 20:15:48.314485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:22:40.802 [2024-12-16 20:15:48.314675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.802 [2024-12-16 20:15:48.314689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.314993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.803 [2024-12-16 20:15:48.315165] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.803 [2024-12-16 20:15:48.315174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 695c319e-f996-4308-bf6d-88949793fc08 00:22:40.803 [2024-12-16 20:15:48.315182] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:40.803 [2024-12-16 20:15:48.315189] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:40.803 [2024-12-16 20:15:48.315195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:40.803 [2024-12-16 20:15:48.315203] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:40.803 [2024-12-16 20:15:48.315209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.803 [2024-12-16 20:15:48.315218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.803 [2024-12-16 20:15:48.315226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.803 [2024-12-16 20:15:48.315232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.803 [2024-12-16 20:15:48.315238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.803 [2024-12-16 20:15:48.315248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.803 [2024-12-16 20:15:48.315254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.803 [2024-12-16 20:15:48.315262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:22:40.803 [2024-12-16 
20:15:48.315269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.325531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.803 [2024-12-16 20:15:48.325555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.803 [2024-12-16 20:15:48.325565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.237 ms 00:22:40.803 [2024-12-16 20:15:48.325572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.325732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.803 [2024-12-16 20:15:48.325740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.803 [2024-12-16 20:15:48.325749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:22:40.803 [2024-12-16 20:15:48.325755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.362705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.803 [2024-12-16 20:15:48.362842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.803 [2024-12-16 20:15:48.362860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.803 [2024-12-16 20:15:48.362867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.362920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.803 [2024-12-16 20:15:48.362927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.803 [2024-12-16 20:15:48.362936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.803 [2024-12-16 20:15:48.362943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.363003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.803 [2024-12-16 20:15:48.363012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.803 [2024-12-16 20:15:48.363020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.803 [2024-12-16 20:15:48.363027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.363043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.803 [2024-12-16 20:15:48.363049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.803 [2024-12-16 20:15:48.363057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.803 [2024-12-16 20:15:48.363064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.803 [2024-12-16 20:15:48.424612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.803 [2024-12-16 20:15:48.424643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.804 [2024-12-16 20:15:48.424654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.804 [2024-12-16 20:15:48.424662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.076 [2024-12-16 20:15:48.448288] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.076 [2024-12-16 20:15:48.448384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.076 [2024-12-16 20:15:48.448447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.076 [2024-12-16 20:15:48.448552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:41.076 [2024-12-16 20:15:48.448611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.076 [2024-12-16 20:15:48.448672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.076 [2024-12-16 20:15:48.448729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.076 [2024-12-16 20:15:48.448738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.076 [2024-12-16 20:15:48.448744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.076 [2024-12-16 20:15:48.448867] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.625 ms, result 0 00:22:41.076 true 00:22:41.076 20:15:48 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75568 00:22:41.076 20:15:48 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75568 00:22:41.076 20:15:48 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:41.076 [2024-12-16 20:15:48.533539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:22:41.076 [2024-12-16 20:15:48.533653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76244 ] 00:22:41.076 [2024-12-16 20:15:48.679624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.335 [2024-12-16 20:15:48.852219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.710  [2024-12-16T20:15:51.285Z] Copying: 252/1024 [MB] (252 MBps) [2024-12-16T20:15:52.220Z] Copying: 507/1024 [MB] (254 MBps) [2024-12-16T20:15:53.156Z] Copying: 762/1024 [MB] (254 MBps) [2024-12-16T20:15:53.156Z] Copying: 1012/1024 [MB] (249 MBps) [2024-12-16T20:15:54.091Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:22:46.451 00:22:46.451 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75568 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:46.451 20:15:53 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:46.451 [2024-12-16 20:15:53.830919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:46.451 [2024-12-16 20:15:53.831183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76302 ] 00:22:46.451 [2024-12-16 20:15:53.978049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.710 [2024-12-16 20:15:54.142339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.968 [2024-12-16 20:15:54.369424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:46.968 [2024-12-16 20:15:54.369476] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:46.968 [2024-12-16 20:15:54.430325] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:46.968 [2024-12-16 20:15:54.431023] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:46.968 [2024-12-16 20:15:54.431535] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:47.537 [2024-12-16 20:15:54.881139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.881171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:47.537 [2024-12-16 20:15:54.881182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:47.537 [2024-12-16 20:15:54.881188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.881223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.881230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.537 [2024-12-16 20:15:54.881239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:47.537 [2024-12-16 20:15:54.881245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.881258] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:47.537 [2024-12-16 20:15:54.881844] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:47.537 [2024-12-16 20:15:54.881858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.881865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.537 [2024-12-16 20:15:54.881871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:22:47.537 [2024-12-16 20:15:54.881877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.883113] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:47.537 [2024-12-16 20:15:54.893758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.893785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:47.537 [2024-12-16 20:15:54.893794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.647 ms 00:22:47.537 [2024-12-16 20:15:54.893800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.893843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.893854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:47.537 [2024-12-16 20:15:54.893860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:47.537 [2024-12-16 20:15:54.893866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.900119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.900285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.537 [2024-12-16 20:15:54.900311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:22:47.537 [2024-12-16 20:15:54.900318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.900387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.900394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.537 [2024-12-16 20:15:54.900401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:47.537 [2024-12-16 20:15:54.900407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.900443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.900451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:47.537 [2024-12-16 20:15:54.900457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:47.537 [2024-12-16 20:15:54.900463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.900481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.537 [2024-12-16 20:15:54.903589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.903685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.537 [2024-12-16 20:15:54.903697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:22:47.537 [2024-12-16 20:15:54.903703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 
20:15:54.903739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.903746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:47.537 [2024-12-16 20:15:54.903752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:47.537 [2024-12-16 20:15:54.903757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.903772] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:47.537 [2024-12-16 20:15:54.903788] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:47.537 [2024-12-16 20:15:54.903815] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:47.537 [2024-12-16 20:15:54.903829] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:47.537 [2024-12-16 20:15:54.903887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:47.537 [2024-12-16 20:15:54.903896] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:47.537 [2024-12-16 20:15:54.903903] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:47.537 [2024-12-16 20:15:54.903911] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:47.537 [2024-12-16 20:15:54.903918] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:47.537 [2024-12-16 20:15:54.903924] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:47.537 [2024-12-16 20:15:54.903930] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:47.537 [2024-12-16 20:15:54.903936] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:47.537 [2024-12-16 20:15:54.903941] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:47.537 [2024-12-16 20:15:54.903949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.903954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:47.537 [2024-12-16 20:15:54.903960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:22:47.537 [2024-12-16 20:15:54.903966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.904066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.537 [2024-12-16 20:15:54.904073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:47.537 [2024-12-16 20:15:54.904078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:22:47.537 [2024-12-16 20:15:54.904084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.537 [2024-12-16 20:15:54.904143] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:47.537 [2024-12-16 20:15:54.904152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:47.537 [2024-12-16 20:15:54.904160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.537 [2024-12-16 20:15:54.904168] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:22:47.537 [2024-12-16 20:15:54.904174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:47.537 [2024-12-16 20:15:54.904179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:47.537 [2024-12-16 20:15:54.904184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:47.537 [2024-12-16 20:15:54.904190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:47.537 [2024-12-16 20:15:54.904201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:47.537 [2024-12-16 20:15:54.904206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.537 [2024-12-16 20:15:54.904211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:47.537 [2024-12-16 20:15:54.904217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:47.537 [2024-12-16 20:15:54.904228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.537 [2024-12-16 20:15:54.904233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:47.537 [2024-12-16 20:15:54.904248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:47.537 [2024-12-16 20:15:54.904253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.537 [2024-12-16 20:15:54.904258] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:47.537 [2024-12-16 20:15:54.904263] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:47.537 [2024-12-16 20:15:54.904268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.537 [2024-12-16 20:15:54.904273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:47.537 [2024-12-16 20:15:54.904277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:47.537 [2024-12-16 20:15:54.904282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:47.537 [2024-12-16 20:15:54.904287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:47.537 [2024-12-16 20:15:54.904292] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:47.537 [2024-12-16 20:15:54.904308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:47.537 [2024-12-16 20:15:54.904313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:47.537 [2024-12-16 20:15:54.904318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:47.538 [2024-12-16 20:15:54.904323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:47.538 [2024-12-16 20:15:54.904328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:47.538 [2024-12-16 20:15:54.904334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:47.538 [2024-12-16 20:15:54.904340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:47.538 [2024-12-16 20:15:54.904344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:47.538 [2024-12-16 20:15:54.904349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:47.538 [2024-12-16 20:15:54.904354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:47.538 [2024-12-16 20:15:54.904360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:47.538 [2024-12-16 20:15:54.904365] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:47.538 [2024-12-16 20:15:54.904370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.538 [2024-12-16 20:15:54.904376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:47.538 [2024-12-16 20:15:54.904381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:47.538 [2024-12-16 20:15:54.904385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.538 [2024-12-16 20:15:54.904396] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:47.538 [2024-12-16 20:15:54.904402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:47.538 [2024-12-16 20:15:54.904408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.538 [2024-12-16 20:15:54.904414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.538 [2024-12-16 20:15:54.904421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:47.538 [2024-12-16 20:15:54.904426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:47.538 [2024-12-16 20:15:54.904432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:47.538 [2024-12-16 20:15:54.904437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:47.538 [2024-12-16 20:15:54.904442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:47.538 [2024-12-16 20:15:54.904448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:47.538 [2024-12-16 20:15:54.904455] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:47.538 [2024-12-16 20:15:54.904462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.538 [2024-12-16 20:15:54.904468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:47.538 [2024-12-16 20:15:54.904473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:47.538 [2024-12-16 20:15:54.904479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:47.538 [2024-12-16 20:15:54.904484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:47.538 [2024-12-16 20:15:54.904490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:47.538 [2024-12-16 20:15:54.904496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:47.538 [2024-12-16 20:15:54.904501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:47.538 [2024-12-16 20:15:54.904506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:47.538 [2024-12-16 20:15:54.904517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:47.538 [2024-12-16 
20:15:54.904522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:47.538 [2024-12-16 20:15:54.904527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:47.538 [2024-12-16 20:15:54.904533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:47.538 [2024-12-16 20:15:54.904539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:47.538 [2024-12-16 20:15:54.904544] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:47.538 [2024-12-16 20:15:54.904550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.538 [2024-12-16 20:15:54.904559] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:47.538 [2024-12-16 20:15:54.904564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:47.538 [2024-12-16 20:15:54.904569] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:47.538 [2024-12-16 20:15:54.904574] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:47.538 [2024-12-16 20:15:54.904580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.904587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:47.538 [2024-12-16 20:15:54.904594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:22:47.538 [2024-12-16 20:15:54.904599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.918369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.918473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.538 [2024-12-16 20:15:54.918485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.733 ms 00:22:47.538 [2024-12-16 20:15:54.918493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.918562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.918569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:47.538 [2024-12-16 20:15:54.918577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:47.538 [2024-12-16 20:15:54.918584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.962212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.962244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.538 [2024-12-16 20:15:54.962254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.593 ms 00:22:47.538 [2024-12-16 20:15:54.962262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 
20:15:54.962296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.962316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.538 [2024-12-16 20:15:54.962324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:47.538 [2024-12-16 20:15:54.962332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.962753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.962771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.538 [2024-12-16 20:15:54.962778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:22:47.538 [2024-12-16 20:15:54.962784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.962880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.962889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.538 [2024-12-16 20:15:54.962895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:47.538 [2024-12-16 20:15:54.962901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.975492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.975515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.538 [2024-12-16 20:15:54.975523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.575 ms 00:22:47.538 [2024-12-16 20:15:54.975529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:54.986204] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:47.538 [2024-12-16 20:15:54.986327] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:47.538 [2024-12-16 20:15:54.986339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:54.986346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:47.538 [2024-12-16 20:15:54.986352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.734 ms 00:22:47.538 [2024-12-16 20:15:54.986358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:55.005614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:55.005711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:47.538 [2024-12-16 20:15:55.005728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.229 ms 00:22:47.538 [2024-12-16 20:15:55.005735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:55.015167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:55.015191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:47.538 [2024-12-16 20:15:55.015198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.404 ms 00:22:47.538 [2024-12-16 20:15:55.015211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:55.024123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:47.538 [2024-12-16 20:15:55.024148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:47.538 [2024-12-16 20:15:55.024156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.887 ms 00:22:47.538 [2024-12-16 20:15:55.024162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:55.024454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:55.024485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:47.538 [2024-12-16 20:15:55.024492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:22:47.538 [2024-12-16 20:15:55.024499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:55.073910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.538 [2024-12-16 20:15:55.073949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:47.538 [2024-12-16 20:15:55.073960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.396 ms 00:22:47.538 [2024-12-16 20:15:55.073966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.538 [2024-12-16 20:15:55.082834] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:47.538 [2024-12-16 20:15:55.084984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.085008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:47.539 [2024-12-16 20:15:55.085018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.978 ms 00:22:47.539 [2024-12-16 20:15:55.085025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.085075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.085083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:47.539 [2024-12-16 20:15:55.085090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:47.539 [2024-12-16 20:15:55.085096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.085145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.085154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:47.539 [2024-12-16 20:15:55.085161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:47.539 [2024-12-16 20:15:55.085167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.086235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.086261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:47.539 [2024-12-16 20:15:55.086269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:22:47.539 [2024-12-16 20:15:55.086280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.086314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.086321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:47.539 [2024-12-16 20:15:55.086330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
00:22:47.539 [2024-12-16 20:15:55.086336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.086366] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:47.539 [2024-12-16 20:15:55.086373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.086379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:47.539 [2024-12-16 20:15:55.086385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:47.539 [2024-12-16 20:15:55.086392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.105010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.105039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:47.539 [2024-12-16 20:15:55.105048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.605 ms 00:22:47.539 [2024-12-16 20:15:55.105055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.105109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.539 [2024-12-16 20:15:55.105116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:47.539 [2024-12-16 20:15:55.105123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:47.539 [2024-12-16 20:15:55.105129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.539 [2024-12-16 20:15:55.106019] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.511 ms, result 0 00:22:48.913  [2024-12-16T20:15:57.120Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T20:15:58.495Z] Copying: 24/1024 [MB] (11 MBps) [2024-12-16T20:15:59.437Z] Copying: 36/1024 [MB] (11 MBps) [2024-12-16T20:16:00.380Z] Copying: 47/1024 [MB] (11 MBps) [2024-12-16T20:16:01.324Z] Copying: 57/1024 [MB] (10 MBps) [2024-12-16T20:16:02.267Z] Copying: 86/1024 [MB] (29 MBps) [2024-12-16T20:16:03.208Z] Copying: 140/1024 [MB] (53 MBps) [2024-12-16T20:16:04.152Z] Copying: 194/1024 [MB] (53 MBps) [2024-12-16T20:16:05.538Z] Copying: 248/1024 [MB] (53 MBps) [2024-12-16T20:16:06.480Z] Copying: 302/1024 [MB] (54 MBps) [2024-12-16T20:16:07.422Z] Copying: 357/1024 [MB] (54 MBps) [2024-12-16T20:16:08.357Z] Copying: 401/1024 [MB] (44 MBps) [2024-12-16T20:16:09.297Z] Copying: 413/1024 [MB] (11 MBps) [2024-12-16T20:16:10.239Z] Copying: 427/1024 [MB] (14 MBps) [2024-12-16T20:16:11.182Z] Copying: 442/1024 [MB] (15 MBps) [2024-12-16T20:16:12.127Z] Copying: 453/1024 [MB] (11 MBps) [2024-12-16T20:16:13.515Z] Copying: 468/1024 [MB] (14 MBps) [2024-12-16T20:16:14.130Z] Copying: 486/1024 [MB] (18 MBps) [2024-12-16T20:16:15.517Z] Copying: 502/1024 [MB] (15 MBps) [2024-12-16T20:16:16.460Z] Copying: 517/1024 [MB] (14 MBps) [2024-12-16T20:16:17.404Z] Copying: 535/1024 [MB] (18 MBps) [2024-12-16T20:16:18.348Z] Copying: 552/1024 [MB] (16 MBps) [2024-12-16T20:16:19.291Z] Copying: 569/1024 [MB] (17 MBps) [2024-12-16T20:16:20.231Z] Copying: 585/1024 [MB] (16 MBps) [2024-12-16T20:16:21.177Z] Copying: 602/1024 [MB] (16 MBps) [2024-12-16T20:16:22.121Z] Copying: 620/1024 [MB] (18 MBps) [2024-12-16T20:16:23.507Z] Copying: 636/1024 [MB] (15 MBps) [2024-12-16T20:16:24.450Z] Copying: 653/1024 [MB] (17 MBps) [2024-12-16T20:16:25.393Z] Copying: 698/1024 [MB] (44 MBps) [2024-12-16T20:16:26.336Z] 
Copying: 714/1024 [MB] (15 MBps) [2024-12-16T20:16:27.278Z] Copying: 728/1024 [MB] (13 MBps) [2024-12-16T20:16:28.222Z] Copying: 751/1024 [MB] (23 MBps) [2024-12-16T20:16:29.166Z] Copying: 766/1024 [MB] (14 MBps) [2024-12-16T20:16:30.551Z] Copying: 802/1024 [MB] (36 MBps) [2024-12-16T20:16:31.123Z] Copying: 830/1024 [MB] (28 MBps) [2024-12-16T20:16:32.511Z] Copying: 884/1024 [MB] (53 MBps) [2024-12-16T20:16:33.454Z] Copying: 915/1024 [MB] (31 MBps) [2024-12-16T20:16:34.397Z] Copying: 934/1024 [MB] (19 MBps) [2024-12-16T20:16:35.340Z] Copying: 952/1024 [MB] (17 MBps) [2024-12-16T20:16:36.313Z] Copying: 965/1024 [MB] (13 MBps) [2024-12-16T20:16:37.255Z] Copying: 985/1024 [MB] (19 MBps) [2024-12-16T20:16:38.197Z] Copying: 1002/1024 [MB] (16 MBps) [2024-12-16T20:16:39.140Z] Copying: 1016/1024 [MB] (14 MBps) [2024-12-16T20:16:39.402Z] Copying: 1048384/1048576 [kB] (7576 kBps) [2024-12-16T20:16:39.402Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-16 20:16:39.278833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.278907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:31.762 [2024-12-16 20:16:39.278925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:31.762 [2024-12-16 20:16:39.278934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.762 [2024-12-16 20:16:39.283045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.762 [2024-12-16 20:16:39.286583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.286629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:31.762 [2024-12-16 20:16:39.286648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.484 ms 00:23:31.762 [2024-12-16 20:16:39.286656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.762 [2024-12-16 20:16:39.299649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.299694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:31.762 [2024-12-16 20:16:39.299706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.808 ms 00:23:31.762 [2024-12-16 20:16:39.299715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.762 [2024-12-16 20:16:39.323834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.323878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:31.762 [2024-12-16 20:16:39.323891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.100 ms 00:23:31.762 [2024-12-16 20:16:39.323899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.762 [2024-12-16 20:16:39.330079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.330274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:31.762 [2024-12-16 20:16:39.330318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:23:31.762 [2024-12-16 20:16:39.330328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.762 [2024-12-16 20:16:39.357092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.357274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV 
cache metadata 00:23:31.762 [2024-12-16 20:16:39.357295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.694 ms 00:23:31.762 [2024-12-16 20:16:39.357326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.762 [2024-12-16 20:16:39.373862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.762 [2024-12-16 20:16:39.373917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:31.762 [2024-12-16 20:16:39.373931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.273 ms 00:23:31.762 [2024-12-16 20:16:39.373939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.024 [2024-12-16 20:16:39.572266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.024 [2024-12-16 20:16:39.572490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.024 [2024-12-16 20:16:39.572513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 198.274 ms 00:23:32.024 [2024-12-16 20:16:39.572521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.024 [2024-12-16 20:16:39.598531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.024 [2024-12-16 20:16:39.598577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:32.024 [2024-12-16 20:16:39.598590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.980 ms 00:23:32.024 [2024-12-16 20:16:39.598597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.024 [2024-12-16 20:16:39.624504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.024 [2024-12-16 20:16:39.624545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:32.024 [2024-12-16 20:16:39.624556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.862 ms 00:23:32.024 [2024-12-16 20:16:39.624564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.024 [2024-12-16 20:16:39.649693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.024 [2024-12-16 20:16:39.649736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:32.024 [2024-12-16 20:16:39.649747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.086 ms 00:23:32.024 [2024-12-16 20:16:39.649754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.287 [2024-12-16 20:16:39.675033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.287 [2024-12-16 20:16:39.675077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:32.287 [2024-12-16 20:16:39.675088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.194 ms 00:23:32.287 [2024-12-16 20:16:39.675095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.287 [2024-12-16 20:16:39.675136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:32.287 [2024-12-16 20:16:39.675151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 89344 / 261120 wr_cnt: 1 state: open 00:23:32.287 [2024-12-16 20:16:39.675163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675179] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 
20:16:39.675404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:23:32.287 [2024-12-16 20:16:39.675625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:32.287 [2024-12-16 20:16:39.675739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:32.288 [2024-12-16 20:16:39.675996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:32.288 [2024-12-16 20:16:39.676010] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 695c319e-f996-4308-bf6d-88949793fc08 00:23:32.288 [2024-12-16 20:16:39.676019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 89344 
00:23:32.288 [2024-12-16 20:16:39.676027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 90304 00:23:32.288 [2024-12-16 20:16:39.676036] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 89344 00:23:32.288 [2024-12-16 20:16:39.676051] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0107 00:23:32.288 [2024-12-16 20:16:39.676058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:32.288 [2024-12-16 20:16:39.676071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:32.288 [2024-12-16 20:16:39.676079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:32.288 [2024-12-16 20:16:39.676085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:32.288 [2024-12-16 20:16:39.676092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:32.288 [2024-12-16 20:16:39.676100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.288 [2024-12-16 20:16:39.676107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:32.288 [2024-12-16 20:16:39.676116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:23:32.288 [2024-12-16 20:16:39.676124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.689924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.288 [2024-12-16 20:16:39.689965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:32.288 [2024-12-16 20:16:39.689976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.762 ms 00:23:32.288 [2024-12-16 20:16:39.689985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.690210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.288 [2024-12-16 20:16:39.690220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:32.288 [2024-12-16 20:16:39.690233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:23:32.288 [2024-12-16 20:16:39.690241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.729121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.729170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.288 [2024-12-16 20:16:39.729180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.729188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.729247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.729255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.288 [2024-12-16 20:16:39.729271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.729278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.729376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.729387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.288 [2024-12-16 20:16:39.729410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.729419] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.729435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.729444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.288 [2024-12-16 20:16:39.729452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.729464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.810874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.810926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.288 [2024-12-16 20:16:39.810938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.810947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.843427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.288 [2024-12-16 20:16:39.843446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.843454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.843535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.288 [2024-12-16 20:16:39.843544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.843552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.843601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.288 [2024-12-16 20:16:39.843610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.843618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.843736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.288 [2024-12-16 20:16:39.843744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.843752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.843793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:32.288 [2024-12-16 20:16:39.843801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.843810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.288 [2024-12-16 20:16:39.843872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.288 [2024-12-16 20:16:39.843881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:32.288 [2024-12-16 20:16:39.843889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.288 [2024-12-16 20:16:39.843939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.289 [2024-12-16 20:16:39.843957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.289 [2024-12-16 20:16:39.843965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.289 [2024-12-16 20:16:39.843973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.289 [2024-12-16 20:16:39.844107] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.244 ms, result 0 00:23:33.674 00:23:33.674 00:23:33.935 20:16:41 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:35.846 20:16:43 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:36.106 [2024-12-16 20:16:43.509924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:36.106 [2024-12-16 20:16:43.510018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76818 ] 00:23:36.106 [2024-12-16 20:16:43.669660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.368 [2024-12-16 20:16:43.878283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.629 [2024-12-16 20:16:44.160658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.629 [2024-12-16 20:16:44.160893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.891 [2024-12-16 20:16:44.313070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.313124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.891 [2024-12-16 20:16:44.313138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.891 [2024-12-16 20:16:44.313149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.313200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.313210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.891 [2024-12-16 20:16:44.313218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:36.891 [2024-12-16 20:16:44.313225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.313244] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.891 [2024-12-16 20:16:44.314011] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.891 [2024-12-16 20:16:44.314034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.314043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.891 [2024-12-16 20:16:44.314051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 
00:23:36.891 [2024-12-16 20:16:44.314059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.315578] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.891 [2024-12-16 20:16:44.329331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.329380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.891 [2024-12-16 20:16:44.329393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.755 ms 00:23:36.891 [2024-12-16 20:16:44.329401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.329473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.329482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.891 [2024-12-16 20:16:44.329491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:36.891 [2024-12-16 20:16:44.329498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.337117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.337160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.891 [2024-12-16 20:16:44.337170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.543 ms 00:23:36.891 [2024-12-16 20:16:44.337178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.337270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.337280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.891 [2024-12-16 20:16:44.337288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:36.891 [2024-12-16 20:16:44.337322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.337370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.337380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.891 [2024-12-16 20:16:44.337389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:36.891 [2024-12-16 20:16:44.337396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.337426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.891 [2024-12-16 20:16:44.341441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.341480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.891 [2024-12-16 20:16:44.341491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.027 ms 00:23:36.891 [2024-12-16 20:16:44.341499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 [2024-12-16 20:16:44.341534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.891 [2024-12-16 20:16:44.341543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.891 [2024-12-16 20:16:44.341551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:36.891 [2024-12-16 20:16:44.341562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.891 
[2024-12-16 20:16:44.341609] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.891 [2024-12-16 20:16:44.341630] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:36.891 [2024-12-16 20:16:44.341665] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.891 [2024-12-16 20:16:44.341682] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:36.891 [2024-12-16 20:16:44.341756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:36.891 [2024-12-16 20:16:44.341767] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.891 [2024-12-16 20:16:44.341780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:36.892 [2024-12-16 20:16:44.341790] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.892 [2024-12-16 20:16:44.341800] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.892 [2024-12-16 20:16:44.341808] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.892 [2024-12-16 20:16:44.341816] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.892 [2024-12-16 20:16:44.341824] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:36.892 [2024-12-16 20:16:44.341831] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:36.892 [2024-12-16 20:16:44.341840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.892 [2024-12-16 20:16:44.341848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.892 [2024-12-16 20:16:44.341856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:23:36.892 [2024-12-16 20:16:44.341862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.892 [2024-12-16 20:16:44.341925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.892 [2024-12-16 20:16:44.341933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.892 [2024-12-16 20:16:44.341941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:36.892 [2024-12-16 20:16:44.341948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.892 [2024-12-16 20:16:44.342019] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.892 [2024-12-16 20:16:44.342029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.892 [2024-12-16 20:16:44.342037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.892 [2024-12-16 20:16:44.342059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342074] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:23:36.892 [2024-12-16 20:16:44.342081] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.892 [2024-12-16 20:16:44.342095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.892 [2024-12-16 20:16:44.342103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.892 [2024-12-16 20:16:44.342109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.892 [2024-12-16 20:16:44.342116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.892 [2024-12-16 20:16:44.342123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:36.892 [2024-12-16 20:16:44.342130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342145] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.892 [2024-12-16 20:16:44.342152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:36.892 [2024-12-16 20:16:44.342159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:36.892 [2024-12-16 20:16:44.342173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:36.892 [2024-12-16 20:16:44.342179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.892 [2024-12-16 20:16:44.342194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.892 [2024-12-16 20:16:44.342214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342227] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.892 [2024-12-16 20:16:44.342233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.892 [2024-12-16 20:16:44.342253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.892 [2024-12-16 20:16:44.342271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.892 [2024-12-16 20:16:44.342284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.892 [2024-12-16 20:16:44.342290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:36.892 [2024-12-16 20:16:44.342324] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.892 [2024-12-16 20:16:44.342331] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.892 [2024-12-16 20:16:44.342342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.892 [2024-12-16 20:16:44.342349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.892 [2024-12-16 20:16:44.342370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.892 [2024-12-16 20:16:44.342378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.892 [2024-12-16 20:16:44.342384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.892 [2024-12-16 20:16:44.342391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.892 [2024-12-16 20:16:44.342398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.892 [2024-12-16 20:16:44.342405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.892 [2024-12-16 20:16:44.342412] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.892 [2024-12-16 20:16:44.342422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.892 [2024-12-16 20:16:44.342431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.892 [2024-12-16 20:16:44.342438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:36.892 [2024-12-16 20:16:44.342446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:36.892 [2024-12-16 20:16:44.342453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:36.892 [2024-12-16 20:16:44.342460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:36.892 [2024-12-16 20:16:44.342468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:36.892 [2024-12-16 20:16:44.342475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:36.892 [2024-12-16 20:16:44.342482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:36.892 [2024-12-16 20:16:44.342489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:36.892 [2024-12-16 20:16:44.342496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:36.892 [2024-12-16 20:16:44.342503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:36.892 [2024-12-16 20:16:44.342510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:36.892 [2024-12-16 20:16:44.342518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:36.892 [2024-12-16 20:16:44.342525] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.892 [2024-12-16 20:16:44.342534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.892 [2024-12-16 20:16:44.342542] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.892 [2024-12-16 20:16:44.342549] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.892 [2024-12-16 20:16:44.342556] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.892 [2024-12-16 20:16:44.342564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.892 [2024-12-16 20:16:44.342571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.892 [2024-12-16 20:16:44.342578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.892 [2024-12-16 20:16:44.342586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:23:36.892 [2024-12-16 20:16:44.342593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.892 [2024-12-16 20:16:44.360362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.892 [2024-12-16 20:16:44.360409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.892 [2024-12-16 20:16:44.360422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.727 ms 00:23:36.892 [2024-12-16 20:16:44.360437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.892 [2024-12-16 20:16:44.360531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.892 [2024-12-16 20:16:44.360541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.892 [2024-12-16 20:16:44.360551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:36.892 [2024-12-16 20:16:44.360559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.892 [2024-12-16 20:16:44.403488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.892 [2024-12-16 20:16:44.403544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.892 [2024-12-16 20:16:44.403557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.874 ms 00:23:36.893 [2024-12-16 20:16:44.403566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.403615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.403625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.893 [2024-12-16 20:16:44.403634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:36.893 [2024-12-16 20:16:44.403642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 
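The Region type/ver/blk_offs/blk_sz entries above are the v5 superblock's metadata map for the NV cache and base devices, dumped once per startup. A hedged way to pull that map out of a saved console log as an aligned table (console.log and the one-entry-per-line layout are assumptions; the pattern just mirrors the format printed above):

# Extract the superblock metadata layout dump shown above and align it.
# console.log is an assumed file name for the saved console output.
grep -oE 'Region type:0x[0-9a-f]+ ver:[0-9]+ blk_offs:0x[0-9a-f]+ blk_sz:0x[0-9a-f]+' console.log \
  | column -t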
[2024-12-16 20:16:44.404186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.404222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.893 [2024-12-16 20:16:44.404233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:23:36.893 [2024-12-16 20:16:44.404247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.404417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.404428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.893 [2024-12-16 20:16:44.404437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:23:36.893 [2024-12-16 20:16:44.404445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.420971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.421016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.893 [2024-12-16 20:16:44.421027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.501 ms 00:23:36.893 [2024-12-16 20:16:44.421035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.435587] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:36.893 [2024-12-16 20:16:44.435638] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.893 [2024-12-16 20:16:44.435652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.435661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.893 [2024-12-16 20:16:44.435671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.510 ms 00:23:36.893 [2024-12-16 20:16:44.435678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.461764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.461826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.893 [2024-12-16 20:16:44.461838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.033 ms 00:23:36.893 [2024-12-16 20:16:44.461846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.474870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.474915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.893 [2024-12-16 20:16:44.474926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.969 ms 00:23:36.893 [2024-12-16 20:16:44.474934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.487831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.487877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.893 [2024-12-16 20:16:44.487899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.851 ms 00:23:36.893 [2024-12-16 20:16:44.487906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.893 [2024-12-16 20:16:44.488325] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:36.893 [2024-12-16 20:16:44.488362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.893 [2024-12-16 20:16:44.488371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:23:36.893 [2024-12-16 20:16:44.488379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.554493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.554552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:37.154 [2024-12-16 20:16:44.554567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.095 ms 00:23:37.154 [2024-12-16 20:16:44.554576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.566615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:37.154 [2024-12-16 20:16:44.569676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.569724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:37.154 [2024-12-16 20:16:44.569737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.033 ms 00:23:37.154 [2024-12-16 20:16:44.569752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.569827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.569839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:37.154 [2024-12-16 20:16:44.569849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:37.154 [2024-12-16 20:16:44.569857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.571212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.571262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:37.154 [2024-12-16 20:16:44.571275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:23:37.154 [2024-12-16 20:16:44.571283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.572682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.572863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:37.154 [2024-12-16 20:16:44.572884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:23:37.154 [2024-12-16 20:16:44.572894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.572935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.572944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:37.154 [2024-12-16 20:16:44.572961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:37.154 [2024-12-16 20:16:44.572969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.573006] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:37.154 [2024-12-16 20:16:44.573016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.573028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Self test on startup 00:23:37.154 [2024-12-16 20:16:44.573037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:37.154 [2024-12-16 20:16:44.573045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.599433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.599609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:37.154 [2024-12-16 20:16:44.599632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.368 ms 00:23:37.154 [2024-12-16 20:16:44.599640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.599724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.154 [2024-12-16 20:16:44.599734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:37.154 [2024-12-16 20:16:44.599744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:37.154 [2024-12-16 20:16:44.599751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.154 [2024-12-16 20:16:44.606160] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 290.231 ms, result 0 00:23:38.541  [2024-12-16T20:16:47.124Z] Copying: 1080/1048576 [kB] (1080 kBps) [2024-12-16T20:16:48.068Z] Copying: 4928/1048576 [kB] (3848 kBps) [2024-12-16T20:16:49.011Z] Copying: 33/1024 [MB] (28 MBps) [2024-12-16T20:16:49.954Z] Copying: 49/1024 [MB] (16 MBps) [2024-12-16T20:16:50.899Z] Copying: 66/1024 [MB] (16 MBps) [2024-12-16T20:16:51.843Z] Copying: 81/1024 [MB] (15 MBps) [2024-12-16T20:16:53.229Z] Copying: 98/1024 [MB] (16 MBps) [2024-12-16T20:16:53.800Z] Copying: 123/1024 [MB] (25 MBps) [2024-12-16T20:16:55.185Z] Copying: 144/1024 [MB] (20 MBps) [2024-12-16T20:16:56.128Z] Copying: 173/1024 [MB] (28 MBps) [2024-12-16T20:16:57.070Z] Copying: 202/1024 [MB] (29 MBps) [2024-12-16T20:16:58.015Z] Copying: 234/1024 [MB] (31 MBps) [2024-12-16T20:16:58.958Z] Copying: 253/1024 [MB] (19 MBps) [2024-12-16T20:16:59.956Z] Copying: 280/1024 [MB] (27 MBps) [2024-12-16T20:17:00.898Z] Copying: 310/1024 [MB] (30 MBps) [2024-12-16T20:17:01.842Z] Copying: 347/1024 [MB] (36 MBps) [2024-12-16T20:17:02.787Z] Copying: 367/1024 [MB] (20 MBps) [2024-12-16T20:17:04.174Z] Copying: 383/1024 [MB] (16 MBps) [2024-12-16T20:17:05.118Z] Copying: 406/1024 [MB] (22 MBps) [2024-12-16T20:17:06.060Z] Copying: 438/1024 [MB] (31 MBps) [2024-12-16T20:17:07.004Z] Copying: 470/1024 [MB] (32 MBps) [2024-12-16T20:17:07.948Z] Copying: 497/1024 [MB] (26 MBps) [2024-12-16T20:17:08.891Z] Copying: 525/1024 [MB] (28 MBps) [2024-12-16T20:17:09.835Z] Copying: 554/1024 [MB] (28 MBps) [2024-12-16T20:17:11.221Z] Copying: 584/1024 [MB] (30 MBps) [2024-12-16T20:17:11.791Z] Copying: 614/1024 [MB] (29 MBps) [2024-12-16T20:17:13.177Z] Copying: 645/1024 [MB] (30 MBps) [2024-12-16T20:17:14.120Z] Copying: 692/1024 [MB] (46 MBps) [2024-12-16T20:17:15.062Z] Copying: 722/1024 [MB] (30 MBps) [2024-12-16T20:17:16.007Z] Copying: 741/1024 [MB] (18 MBps) [2024-12-16T20:17:16.950Z] Copying: 770/1024 [MB] (28 MBps) [2024-12-16T20:17:17.896Z] Copying: 797/1024 [MB] (27 MBps) [2024-12-16T20:17:18.840Z] Copying: 818/1024 [MB] (21 MBps) [2024-12-16T20:17:20.230Z] Copying: 845/1024 [MB] (26 MBps) [2024-12-16T20:17:20.802Z] Copying: 889/1024 [MB] (43 MBps) [2024-12-16T20:17:22.210Z] Copying: 919/1024 [MB] (30 MBps) [2024-12-16T20:17:23.153Z] 
Copying: 955/1024 [MB] (35 MBps) [2024-12-16T20:17:24.096Z] Copying: 977/1024 [MB] (21 MBps) [2024-12-16T20:17:24.670Z] Copying: 1005/1024 [MB] (27 MBps) [2024-12-16T20:17:24.670Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-16 20:17:24.482090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.482176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:17.030 [2024-12-16 20:17:24.482194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:17.030 [2024-12-16 20:17:24.482203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.482231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:17.030 [2024-12-16 20:17:24.486129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.486174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:17.030 [2024-12-16 20:17:24.486187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.880 ms 00:24:17.030 [2024-12-16 20:17:24.486195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.486492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.486506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:17.030 [2024-12-16 20:17:24.486516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:17.030 [2024-12-16 20:17:24.486525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.501742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.501805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:17.030 [2024-12-16 20:17:24.501819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.199 ms 00:24:17.030 [2024-12-16 20:17:24.501828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.507977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.508024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:17.030 [2024-12-16 20:17:24.508036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:24:17.030 [2024-12-16 20:17:24.508043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.535109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.535155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:17.030 [2024-12-16 20:17:24.535168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.989 ms 00:24:17.030 [2024-12-16 20:17:24.535175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.551739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.551784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:17.030 [2024-12-16 20:17:24.551797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.520 ms 00:24:17.030 [2024-12-16 20:17:24.551805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 
20:17:24.560420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.560462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:17.030 [2024-12-16 20:17:24.560473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.567 ms 00:24:17.030 [2024-12-16 20:17:24.560487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.586149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.586347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:17.030 [2024-12-16 20:17:24.586371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.647 ms 00:24:17.030 [2024-12-16 20:17:24.586378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.611636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.611684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:17.030 [2024-12-16 20:17:24.611695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.220 ms 00:24:17.030 [2024-12-16 20:17:24.611714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.636670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.636714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:17.030 [2024-12-16 20:17:24.636725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.912 ms 00:24:17.030 [2024-12-16 20:17:24.636731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.661475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.030 [2024-12-16 20:17:24.661651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:17.030 [2024-12-16 20:17:24.661671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.660 ms 00:24:17.030 [2024-12-16 20:17:24.661678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.030 [2024-12-16 20:17:24.661828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:17.030 [2024-12-16 20:17:24.661861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:17.030 [2024-12-16 20:17:24.661873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:24:17.030 [2024-12-16 20:17:24.661882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661930] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:17.030 [2024-12-16 20:17:24.661952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.661959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.661966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.661974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.661981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.661988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.661996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 
[2024-12-16 20:17:24.662119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:24:17.031 [2024-12-16 20:17:24.662338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:17.031 [2024-12-16 20:17:24.662664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:17.032 [2024-12-16 20:17:24.662672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:17.032 [2024-12-16 20:17:24.662688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:17.032 [2024-12-16 20:17:24.662697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 695c319e-f996-4308-bf6d-88949793fc08 00:24:17.032 [2024-12-16 20:17:24.662705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:17.032 [2024-12-16 20:17:24.662719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 177088 00:24:17.032 [2024-12-16 20:17:24.662726] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 175104 00:24:17.032 [2024-12-16 20:17:24.662735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0113 00:24:17.032 [2024-12-16 20:17:24.662744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:17.032 [2024-12-16 20:17:24.662753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:17.032 [2024-12-16 20:17:24.662760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:17.032 [2024-12-16 
20:17:24.662767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:17.032 [2024-12-16 20:17:24.662781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:17.032 [2024-12-16 20:17:24.662789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.032 [2024-12-16 20:17:24.662798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:17.032 [2024-12-16 20:17:24.662806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:24:17.032 [2024-12-16 20:17:24.662815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.675953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.293 [2024-12-16 20:17:24.675994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:17.293 [2024-12-16 20:17:24.676006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.101 ms 00:24:17.293 [2024-12-16 20:17:24.676013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.676246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.293 [2024-12-16 20:17:24.676256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:17.293 [2024-12-16 20:17:24.676264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:24:17.293 [2024-12-16 20:17:24.676277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.715221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.715268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:17.293 [2024-12-16 20:17:24.715279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.715287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.715364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.715373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:17.293 [2024-12-16 20:17:24.715381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.715389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.715484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.715496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:17.293 [2024-12-16 20:17:24.715504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.715512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.715528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.715536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:17.293 [2024-12-16 20:17:24.715543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.715551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.795747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.795796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:24:17.293 [2024-12-16 20:17:24.795809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.795817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.827727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.827772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:17.293 [2024-12-16 20:17:24.827784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.827793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.827869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.827879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:17.293 [2024-12-16 20:17:24.827887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.827901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.293 [2024-12-16 20:17:24.827943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.293 [2024-12-16 20:17:24.827953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:17.293 [2024-12-16 20:17:24.827961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.293 [2024-12-16 20:17:24.827970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.294 [2024-12-16 20:17:24.828067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.294 [2024-12-16 20:17:24.828081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:17.294 [2024-12-16 20:17:24.828089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.294 [2024-12-16 20:17:24.828097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.294 [2024-12-16 20:17:24.828133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.294 [2024-12-16 20:17:24.828142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:17.294 [2024-12-16 20:17:24.828150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.294 [2024-12-16 20:17:24.828159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.294 [2024-12-16 20:17:24.828201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.294 [2024-12-16 20:17:24.828213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:17.294 [2024-12-16 20:17:24.828222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.294 [2024-12-16 20:17:24.828229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.294 [2024-12-16 20:17:24.828279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.294 [2024-12-16 20:17:24.828288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:17.294 [2024-12-16 20:17:24.828296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.294 [2024-12-16 20:17:24.828338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.294 [2024-12-16 20:17:24.828484] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 346.363 ms, result 0 00:24:18.241 00:24:18.241 00:24:18.241 20:17:25 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:20.787 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:20.787 20:17:27 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:20.787 [2024-12-16 20:17:27.932566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:20.787 [2024-12-16 20:17:27.932684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77276 ] 00:24:20.787 [2024-12-16 20:17:28.085285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.787 [2024-12-16 20:17:28.311181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.047 [2024-12-16 20:17:28.598861] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:21.047 [2024-12-16 20:17:28.598942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:21.309 [2024-12-16 20:17:28.755366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.755426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:21.309 [2024-12-16 20:17:28.755441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:21.309 [2024-12-16 20:17:28.755453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.755506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.755517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:21.309 [2024-12-16 20:17:28.755526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:21.309 [2024-12-16 20:17:28.755534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.755554] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:21.309 [2024-12-16 20:17:28.756335] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:21.309 [2024-12-16 20:17:28.756355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.756364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:21.309 [2024-12-16 20:17:28.756373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:24:21.309 [2024-12-16 20:17:28.756382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.758122] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:21.309 [2024-12-16 20:17:28.772793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.772844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:21.309 [2024-12-16 20:17:28.772857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.674 ms 00:24:21.309 [2024-12-16 20:17:28.772865] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.772954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.772964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:21.309 [2024-12-16 20:17:28.772974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:21.309 [2024-12-16 20:17:28.772982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.780923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.781107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:21.309 [2024-12-16 20:17:28.781125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.864 ms 00:24:21.309 [2024-12-16 20:17:28.781133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.781238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.781248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:21.309 [2024-12-16 20:17:28.781257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:21.309 [2024-12-16 20:17:28.781265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.309 [2024-12-16 20:17:28.781341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.309 [2024-12-16 20:17:28.781352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:21.310 [2024-12-16 20:17:28.781361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:21.310 [2024-12-16 20:17:28.781368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.310 [2024-12-16 20:17:28.781400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:21.310 [2024-12-16 20:17:28.785519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.310 [2024-12-16 20:17:28.785555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:21.310 [2024-12-16 20:17:28.785566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.133 ms 00:24:21.310 [2024-12-16 20:17:28.785574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.310 [2024-12-16 20:17:28.785612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.310 [2024-12-16 20:17:28.785621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:21.310 [2024-12-16 20:17:28.785629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:21.310 [2024-12-16 20:17:28.785640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.310 [2024-12-16 20:17:28.785691] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:21.310 [2024-12-16 20:17:28.785713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:21.310 [2024-12-16 20:17:28.785748] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:21.310 [2024-12-16 20:17:28.785764] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:21.310 [2024-12-16 20:17:28.785840] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:21.310 [2024-12-16 20:17:28.785851] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:21.310 [2024-12-16 20:17:28.785865] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:21.310 [2024-12-16 20:17:28.785875] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:21.310 [2024-12-16 20:17:28.785884] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:21.310 [2024-12-16 20:17:28.785893] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:21.310 [2024-12-16 20:17:28.785901] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:21.310 [2024-12-16 20:17:28.785908] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:21.310 [2024-12-16 20:17:28.785915] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:21.310 [2024-12-16 20:17:28.785924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.310 [2024-12-16 20:17:28.785931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:21.310 [2024-12-16 20:17:28.785939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:24:21.310 [2024-12-16 20:17:28.785947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.310 [2024-12-16 20:17:28.786010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.310 [2024-12-16 20:17:28.786019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:21.310 [2024-12-16 20:17:28.786026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:21.310 [2024-12-16 20:17:28.786034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.310 [2024-12-16 20:17:28.786105] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:21.310 [2024-12-16 20:17:28.786115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:21.310 [2024-12-16 20:17:28.786124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:21.310 [2024-12-16 20:17:28.786147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:21.310 [2024-12-16 20:17:28.786169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:21.310 [2024-12-16 20:17:28.786185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:21.310 [2024-12-16 20:17:28.786192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:21.310 [2024-12-16 20:17:28.786200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:21.310 
[2024-12-16 20:17:28.786207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:21.310 [2024-12-16 20:17:28.786214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:21.310 [2024-12-16 20:17:28.786221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:21.310 [2024-12-16 20:17:28.786244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:21.310 [2024-12-16 20:17:28.786250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:21.310 [2024-12-16 20:17:28.786263] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:21.310 [2024-12-16 20:17:28.786270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:21.310 [2024-12-16 20:17:28.786284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786326] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:21.310 [2024-12-16 20:17:28.786334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:21.310 [2024-12-16 20:17:28.786354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:21.310 [2024-12-16 20:17:28.786375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:21.310 [2024-12-16 20:17:28.786396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:21.310 [2024-12-16 20:17:28.786411] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:21.310 [2024-12-16 20:17:28.786417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:21.310 [2024-12-16 20:17:28.786424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:21.310 [2024-12-16 20:17:28.786430] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:21.310 [2024-12-16 20:17:28.786441] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:21.310 [2024-12-16 20:17:28.786449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.310 [2024-12-16 20:17:28.786469] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:24:21.310 [2024-12-16 20:17:28.786476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:21.310 [2024-12-16 20:17:28.786483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:21.310 [2024-12-16 20:17:28.786490] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:21.310 [2024-12-16 20:17:28.786496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:21.310 [2024-12-16 20:17:28.786503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:21.310 [2024-12-16 20:17:28.786511] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:21.310 [2024-12-16 20:17:28.786521] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:21.310 [2024-12-16 20:17:28.786529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:21.310 [2024-12-16 20:17:28.786537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:21.310 [2024-12-16 20:17:28.786544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:21.310 [2024-12-16 20:17:28.786551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:21.310 [2024-12-16 20:17:28.786558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:21.310 [2024-12-16 20:17:28.786565] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:21.310 [2024-12-16 20:17:28.786573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:21.310 [2024-12-16 20:17:28.786580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:21.310 [2024-12-16 20:17:28.786587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:21.310 [2024-12-16 20:17:28.786594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:21.310 [2024-12-16 20:17:28.786601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:21.310 [2024-12-16 20:17:28.786608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:21.310 [2024-12-16 20:17:28.786616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:21.310 [2024-12-16 20:17:28.786623] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:21.310 [2024-12-16 20:17:28.786631] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:21.310 [2024-12-16 20:17:28.786650] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:21.311 [2024-12-16 20:17:28.786657] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:21.311 [2024-12-16 20:17:28.786664] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:21.311 [2024-12-16 20:17:28.786671] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:21.311 [2024-12-16 20:17:28.786678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.786686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:21.311 [2024-12-16 20:17:28.786694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:24:21.311 [2024-12-16 20:17:28.786702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.804828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.804879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:21.311 [2024-12-16 20:17:28.804892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:24:21.311 [2024-12-16 20:17:28.804907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.805004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.805013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:21.311 [2024-12-16 20:17:28.805021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:21.311 [2024-12-16 20:17:28.805029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.853013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.853209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:21.311 [2024-12-16 20:17:28.853231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.930 ms 00:24:21.311 [2024-12-16 20:17:28.853240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.853290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.853325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:21.311 [2024-12-16 20:17:28.853334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:21.311 [2024-12-16 20:17:28.853343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.853881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.853903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:21.311 [2024-12-16 20:17:28.853914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:24:21.311 [2024-12-16 20:17:28.853927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.854063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.854073] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:21.311 [2024-12-16 20:17:28.854081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:24:21.311 [2024-12-16 20:17:28.854089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.870584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.870628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:21.311 [2024-12-16 20:17:28.870639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.474 ms 00:24:21.311 [2024-12-16 20:17:28.870647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.884805] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:21.311 [2024-12-16 20:17:28.884850] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:21.311 [2024-12-16 20:17:28.884862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.884870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:21.311 [2024-12-16 20:17:28.884881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.106 ms 00:24:21.311 [2024-12-16 20:17:28.884888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.910851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.910901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:21.311 [2024-12-16 20:17:28.910912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.912 ms 00:24:21.311 [2024-12-16 20:17:28.910921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.923988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.924033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:21.311 [2024-12-16 20:17:28.924046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.013 ms 00:24:21.311 [2024-12-16 20:17:28.924054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.936807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.936858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:21.311 [2024-12-16 20:17:28.936870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms 00:24:21.311 [2024-12-16 20:17:28.936877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.311 [2024-12-16 20:17:28.937263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.311 [2024-12-16 20:17:28.937276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:21.311 [2024-12-16 20:17:28.937287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:24:21.311 [2024-12-16 20:17:28.937295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.572 [2024-12-16 20:17:29.003059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.572 [2024-12-16 20:17:29.003288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:24:21.572 [2024-12-16 20:17:29.003337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.719 ms 00:24:21.572 [2024-12-16 20:17:29.003346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.572 [2024-12-16 20:17:29.015239] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:21.572 [2024-12-16 20:17:29.018508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.572 [2024-12-16 20:17:29.018553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:21.572 [2024-12-16 20:17:29.018566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.064 ms 00:24:21.572 [2024-12-16 20:17:29.018580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.572 [2024-12-16 20:17:29.018658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.572 [2024-12-16 20:17:29.018669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:21.572 [2024-12-16 20:17:29.018679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:21.572 [2024-12-16 20:17:29.018687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.572 [2024-12-16 20:17:29.019538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.572 [2024-12-16 20:17:29.019576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:21.572 [2024-12-16 20:17:29.019588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:24:21.572 [2024-12-16 20:17:29.019597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.572 [2024-12-16 20:17:29.020951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.572 [2024-12-16 20:17:29.020995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:21.572 [2024-12-16 20:17:29.021006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:24:21.572 [2024-12-16 20:17:29.021013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.573 [2024-12-16 20:17:29.021049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.573 [2024-12-16 20:17:29.021057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:21.573 [2024-12-16 20:17:29.021071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:21.573 [2024-12-16 20:17:29.021079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.573 [2024-12-16 20:17:29.021116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:21.573 [2024-12-16 20:17:29.021126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.573 [2024-12-16 20:17:29.021138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:21.573 [2024-12-16 20:17:29.021146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:21.573 [2024-12-16 20:17:29.021154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.573 [2024-12-16 20:17:29.047456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.573 [2024-12-16 20:17:29.047506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:21.573 [2024-12-16 20:17:29.047521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.283 ms 
00:24:21.573 [2024-12-16 20:17:29.047529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.573 [2024-12-16 20:17:29.047620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.573 [2024-12-16 20:17:29.047631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:21.573 [2024-12-16 20:17:29.047641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:21.573 [2024-12-16 20:17:29.047649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.573 [2024-12-16 20:17:29.048987] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.146 ms, result 0 00:24:22.958  [2024-12-16T20:17:31.538Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-16T20:17:32.480Z] Copying: 31/1024 [MB] (11 MBps) [2024-12-16T20:17:33.421Z] Copying: 43/1024 [MB] (12 MBps) [2024-12-16T20:17:34.363Z] Copying: 54/1024 [MB] (10 MBps) [2024-12-16T20:17:35.307Z] Copying: 76/1024 [MB] (22 MBps) [2024-12-16T20:17:36.248Z] Copying: 94/1024 [MB] (18 MBps) [2024-12-16T20:17:37.634Z] Copying: 108/1024 [MB] (14 MBps) [2024-12-16T20:17:38.576Z] Copying: 128/1024 [MB] (20 MBps) [2024-12-16T20:17:39.519Z] Copying: 147/1024 [MB] (18 MBps) [2024-12-16T20:17:40.461Z] Copying: 163/1024 [MB] (16 MBps) [2024-12-16T20:17:41.403Z] Copying: 180/1024 [MB] (16 MBps) [2024-12-16T20:17:42.346Z] Copying: 198/1024 [MB] (17 MBps) [2024-12-16T20:17:43.287Z] Copying: 213/1024 [MB] (14 MBps) [2024-12-16T20:17:44.230Z] Copying: 238/1024 [MB] (25 MBps) [2024-12-16T20:17:45.716Z] Copying: 257/1024 [MB] (18 MBps) [2024-12-16T20:17:46.287Z] Copying: 276/1024 [MB] (18 MBps) [2024-12-16T20:17:47.229Z] Copying: 294/1024 [MB] (18 MBps) [2024-12-16T20:17:48.613Z] Copying: 310/1024 [MB] (15 MBps) [2024-12-16T20:17:49.554Z] Copying: 325/1024 [MB] (15 MBps) [2024-12-16T20:17:50.495Z] Copying: 336/1024 [MB] (10 MBps) [2024-12-16T20:17:51.436Z] Copying: 346/1024 [MB] (10 MBps) [2024-12-16T20:17:52.377Z] Copying: 357/1024 [MB] (10 MBps) [2024-12-16T20:17:53.326Z] Copying: 374/1024 [MB] (16 MBps) [2024-12-16T20:17:54.268Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-16T20:17:55.652Z] Copying: 403/1024 [MB] (18 MBps) [2024-12-16T20:17:56.595Z] Copying: 422/1024 [MB] (19 MBps) [2024-12-16T20:17:57.538Z] Copying: 441/1024 [MB] (19 MBps) [2024-12-16T20:17:58.480Z] Copying: 458/1024 [MB] (17 MBps) [2024-12-16T20:17:59.422Z] Copying: 477/1024 [MB] (18 MBps) [2024-12-16T20:18:00.364Z] Copying: 497/1024 [MB] (19 MBps) [2024-12-16T20:18:01.307Z] Copying: 520/1024 [MB] (23 MBps) [2024-12-16T20:18:02.250Z] Copying: 545/1024 [MB] (25 MBps) [2024-12-16T20:18:03.635Z] Copying: 565/1024 [MB] (20 MBps) [2024-12-16T20:18:04.579Z] Copying: 585/1024 [MB] (19 MBps) [2024-12-16T20:18:05.521Z] Copying: 604/1024 [MB] (18 MBps) [2024-12-16T20:18:06.465Z] Copying: 626/1024 [MB] (22 MBps) [2024-12-16T20:18:07.406Z] Copying: 647/1024 [MB] (20 MBps) [2024-12-16T20:18:08.400Z] Copying: 665/1024 [MB] (17 MBps) [2024-12-16T20:18:09.361Z] Copying: 680/1024 [MB] (15 MBps) [2024-12-16T20:18:10.305Z] Copying: 690/1024 [MB] (10 MBps) [2024-12-16T20:18:11.249Z] Copying: 701/1024 [MB] (10 MBps) [2024-12-16T20:18:12.636Z] Copying: 712/1024 [MB] (10 MBps) [2024-12-16T20:18:13.580Z] Copying: 731/1024 [MB] (19 MBps) [2024-12-16T20:18:14.524Z] Copying: 748/1024 [MB] (16 MBps) [2024-12-16T20:18:15.469Z] Copying: 761/1024 [MB] (13 MBps) [2024-12-16T20:18:16.413Z] Copying: 782/1024 [MB] (21 MBps) [2024-12-16T20:18:17.357Z] Copying: 798/1024 
[MB] (15 MBps) [2024-12-16T20:18:18.300Z] Copying: 811/1024 [MB] (13 MBps) [2024-12-16T20:18:19.243Z] Copying: 828/1024 [MB] (16 MBps) [2024-12-16T20:18:20.630Z] Copying: 851/1024 [MB] (23 MBps) [2024-12-16T20:18:21.573Z] Copying: 872/1024 [MB] (20 MBps) [2024-12-16T20:18:22.516Z] Copying: 891/1024 [MB] (19 MBps) [2024-12-16T20:18:23.460Z] Copying: 905/1024 [MB] (14 MBps) [2024-12-16T20:18:24.405Z] Copying: 920/1024 [MB] (14 MBps) [2024-12-16T20:18:25.348Z] Copying: 935/1024 [MB] (15 MBps) [2024-12-16T20:18:26.291Z] Copying: 947/1024 [MB] (11 MBps) [2024-12-16T20:18:27.234Z] Copying: 963/1024 [MB] (15 MBps) [2024-12-16T20:18:28.621Z] Copying: 978/1024 [MB] (15 MBps) [2024-12-16T20:18:29.566Z] Copying: 996/1024 [MB] (17 MBps) [2024-12-16T20:18:30.139Z] Copying: 1010/1024 [MB] (14 MBps) [2024-12-16T20:18:30.139Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 20:18:30.114645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.499 [2024-12-16 20:18:30.114724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:22.499 [2024-12-16 20:18:30.114741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:22.499 [2024-12-16 20:18:30.114751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.499 [2024-12-16 20:18:30.114780] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.499 [2024-12-16 20:18:30.118396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.499 [2024-12-16 20:18:30.118454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:22.499 [2024-12-16 20:18:30.118466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:25:22.499 [2024-12-16 20:18:30.118475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.499 [2024-12-16 20:18:30.118775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.499 [2024-12-16 20:18:30.118789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:22.499 [2024-12-16 20:18:30.118801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:25:22.499 [2024-12-16 20:18:30.118810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.499 [2024-12-16 20:18:30.122531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.499 [2024-12-16 20:18:30.122574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:22.499 [2024-12-16 20:18:30.122589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms 00:25:22.499 [2024-12-16 20:18:30.122598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.499 [2024-12-16 20:18:30.129563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.499 [2024-12-16 20:18:30.129783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:22.499 [2024-12-16 20:18:30.129807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.946 ms 00:25:22.499 [2024-12-16 20:18:30.129817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.157983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.158034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:22.761 [2024-12-16 20:18:30.158048] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 28.072 ms 00:25:22.761 [2024-12-16 20:18:30.158056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.175236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.175289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:22.761 [2024-12-16 20:18:30.175321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.128 ms 00:25:22.761 [2024-12-16 20:18:30.175337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.184281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.184344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:22.761 [2024-12-16 20:18:30.184356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.885 ms 00:25:22.761 [2024-12-16 20:18:30.184364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.211386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.211597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:22.761 [2024-12-16 20:18:30.211621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.006 ms 00:25:22.761 [2024-12-16 20:18:30.211630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.238337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.238540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:22.761 [2024-12-16 20:18:30.238576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.580 ms 00:25:22.761 [2024-12-16 20:18:30.238584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.264804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.264851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:22.761 [2024-12-16 20:18:30.264863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.097 ms 00:25:22.761 [2024-12-16 20:18:30.264871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.290438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.761 [2024-12-16 20:18:30.290488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:22.761 [2024-12-16 20:18:30.290500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.460 ms 00:25:22.761 [2024-12-16 20:18:30.290507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.761 [2024-12-16 20:18:30.290556] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:22.761 [2024-12-16 20:18:30.290580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:22.761 [2024-12-16 20:18:30.290592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:25:22.761 [2024-12-16 20:18:30.290600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 
state: free 00:25:22.761 [2024-12-16 20:18:30.290617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:22.761 [2024-12-16 20:18:30.290766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 
261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.290992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291193] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:22.762 [2024-12-16 20:18:30.291388] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:22.762 [2024-12-16 20:18:30.291397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 695c319e-f996-4308-bf6d-88949793fc08 00:25:22.762 [2024-12-16 20:18:30.291405] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:25:22.762 [2024-12-16 20:18:30.291414] ftl_debug.c: 214:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] total writes: 960 00:25:22.762 [2024-12-16 20:18:30.291422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:22.762 [2024-12-16 20:18:30.291430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:22.762 [2024-12-16 20:18:30.291438] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:22.762 [2024-12-16 20:18:30.291446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:22.762 [2024-12-16 20:18:30.291454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:22.762 [2024-12-16 20:18:30.291469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:22.762 [2024-12-16 20:18:30.291476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:22.762 [2024-12-16 20:18:30.291483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.762 [2024-12-16 20:18:30.291491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:22.762 [2024-12-16 20:18:30.291504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:25:22.762 [2024-12-16 20:18:30.291513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.762 [2024-12-16 20:18:30.305080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.762 [2024-12-16 20:18:30.305128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:22.762 [2024-12-16 20:18:30.305140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.531 ms 00:25:22.762 [2024-12-16 20:18:30.305148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.763 [2024-12-16 20:18:30.305404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.763 [2024-12-16 20:18:30.305415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:22.763 [2024-12-16 20:18:30.305425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:25:22.763 [2024-12-16 20:18:30.305433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.763 [2024-12-16 20:18:30.344515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.763 [2024-12-16 20:18:30.344568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:22.763 [2024-12-16 20:18:30.344581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.763 [2024-12-16 20:18:30.344589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.763 [2024-12-16 20:18:30.344665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.763 [2024-12-16 20:18:30.344675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:22.763 [2024-12-16 20:18:30.344684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.763 [2024-12-16 20:18:30.344692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.763 [2024-12-16 20:18:30.344774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.763 [2024-12-16 20:18:30.344785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:22.763 [2024-12-16 20:18:30.344794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.763 [2024-12-16 20:18:30.344803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.763 [2024-12-16 
20:18:30.344819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.763 [2024-12-16 20:18:30.344832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:22.763 [2024-12-16 20:18:30.344840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.763 [2024-12-16 20:18:30.344848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.425702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.425760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.024 [2024-12-16 20:18:30.425773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.425781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.024 [2024-12-16 20:18:30.458350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.024 [2024-12-16 20:18:30.458446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.024 [2024-12-16 20:18:30.458521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.024 [2024-12-16 20:18:30.458655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.024 [2024-12-16 20:18:30.458711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.024 [2024-12-16 20:18:30.458784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458792] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.024 [2024-12-16 20:18:30.458847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.024 [2024-12-16 20:18:30.458859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.024 [2024-12-16 20:18:30.458867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.024 [2024-12-16 20:18:30.458999] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.324 ms, result 0 00:25:24.011 00:25:24.011 00:25:24.011 20:18:31 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:26.561 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75568 00:25:26.561 20:18:33 -- common/autotest_common.sh@936 -- # '[' -z 75568 ']' 00:25:26.561 20:18:33 -- common/autotest_common.sh@940 -- # kill -0 75568 00:25:26.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75568) - No such process 00:25:26.561 Process with pid 75568 is not found 00:25:26.561 20:18:33 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75568 is not found' 00:25:26.561 20:18:33 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:26.561 20:18:34 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:26.561 20:18:34 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:26.561 Remove shared memory files 00:25:26.561 20:18:34 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:26.561 20:18:34 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:26.561 20:18:34 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:26.561 20:18:34 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:26.561 20:18:34 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:26.561 ************************************ 00:25:26.561 END TEST ftl_dirty_shutdown 00:25:26.561 ************************************ 00:25:26.561 00:25:26.561 real 3m43.820s 00:25:26.561 user 4m3.440s 00:25:26.561 sys 0m25.311s 00:25:26.561 20:18:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:26.561 20:18:34 -- common/autotest_common.sh@10 -- # set +x 00:25:26.561 20:18:34 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:26.561 20:18:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:26.561 20:18:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:26.561 20:18:34 -- common/autotest_common.sh@10 -- # set +x 00:25:26.561 ************************************ 00:25:26.561 START TEST ftl_upgrade_shutdown 00:25:26.561 ************************************ 
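For readers following the trace that begins below: the ftl_upgrade_shutdown run drives test/ftl/upgrade_shutdown.sh with 0000:00:07.0 as the base device and 0000:00:06.0 as the NV cache device, and before bringing up a target it exports its FTL parameters. The block below is a condensed sketch of that setup; every value is copied from the xtrace a little further below, while the grouping into a single block and the comments are approximations, not a copy of the script.

```bash
# Sketch of the environment ftl_upgrade_shutdown.sh exports (values taken from
# the xtrace below; structure and comments are an approximation of the script).
export FTL_BDEV=ftl                 # name of the FTL bdev under test
export FTL_BASE=0000:00:07.0        # base (data) NVMe device, PCIe address
export FTL_BASE_SIZE=20480          # base partition size in MiB
export FTL_CACHE=0000:00:06.0       # NV cache NVMe device, PCIe address
export FTL_CACHE_SIZE=5120          # NV cache size in MiB
export FTL_L2P_DRAM_LIMIT=2         # cap on how much of the L2P stays resident in DRAM
```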
00:25:26.561 20:18:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:26.823 * Looking for test storage... 00:25:26.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.823 20:18:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:26.823 20:18:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:26.823 20:18:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:26.823 20:18:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:26.823 20:18:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:26.823 20:18:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:26.823 20:18:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:26.823 20:18:34 -- scripts/common.sh@335 -- # IFS=.-: 00:25:26.823 20:18:34 -- scripts/common.sh@335 -- # read -ra ver1 00:25:26.823 20:18:34 -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.823 20:18:34 -- scripts/common.sh@336 -- # read -ra ver2 00:25:26.823 20:18:34 -- scripts/common.sh@337 -- # local 'op=<' 00:25:26.823 20:18:34 -- scripts/common.sh@339 -- # ver1_l=2 00:25:26.823 20:18:34 -- scripts/common.sh@340 -- # ver2_l=1 00:25:26.823 20:18:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:26.823 20:18:34 -- scripts/common.sh@343 -- # case "$op" in 00:25:26.823 20:18:34 -- scripts/common.sh@344 -- # : 1 00:25:26.823 20:18:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:26.823 20:18:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.823 20:18:34 -- scripts/common.sh@364 -- # decimal 1 00:25:26.823 20:18:34 -- scripts/common.sh@352 -- # local d=1 00:25:26.823 20:18:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.823 20:18:34 -- scripts/common.sh@354 -- # echo 1 00:25:26.823 20:18:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:26.823 20:18:34 -- scripts/common.sh@365 -- # decimal 2 00:25:26.823 20:18:34 -- scripts/common.sh@352 -- # local d=2 00:25:26.823 20:18:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.823 20:18:34 -- scripts/common.sh@354 -- # echo 2 00:25:26.823 20:18:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:26.823 20:18:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:26.823 20:18:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:26.823 20:18:34 -- scripts/common.sh@367 -- # return 0 00:25:26.823 20:18:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.823 20:18:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.823 --rc genhtml_branch_coverage=1 00:25:26.823 --rc genhtml_function_coverage=1 00:25:26.823 --rc genhtml_legend=1 00:25:26.823 --rc geninfo_all_blocks=1 00:25:26.823 --rc geninfo_unexecuted_blocks=1 00:25:26.823 00:25:26.823 ' 00:25:26.823 20:18:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.823 --rc genhtml_branch_coverage=1 00:25:26.823 --rc genhtml_function_coverage=1 00:25:26.823 --rc genhtml_legend=1 00:25:26.823 --rc geninfo_all_blocks=1 00:25:26.823 --rc geninfo_unexecuted_blocks=1 00:25:26.823 00:25:26.823 ' 00:25:26.823 20:18:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.823 --rc genhtml_branch_coverage=1 00:25:26.823 --rc 
genhtml_function_coverage=1 00:25:26.823 --rc genhtml_legend=1 00:25:26.823 --rc geninfo_all_blocks=1 00:25:26.823 --rc geninfo_unexecuted_blocks=1 00:25:26.823 00:25:26.823 ' 00:25:26.823 20:18:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.823 --rc genhtml_branch_coverage=1 00:25:26.823 --rc genhtml_function_coverage=1 00:25:26.823 --rc genhtml_legend=1 00:25:26.823 --rc geninfo_all_blocks=1 00:25:26.823 --rc geninfo_unexecuted_blocks=1 00:25:26.823 00:25:26.823 ' 00:25:26.823 20:18:34 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:26.823 20:18:34 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:26.823 20:18:34 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.823 20:18:34 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.823 20:18:34 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:26.823 20:18:34 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:26.823 20:18:34 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.823 20:18:34 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:26.823 20:18:34 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:26.823 20:18:34 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.823 20:18:34 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.823 20:18:34 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:26.823 20:18:34 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:26.823 20:18:34 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.823 20:18:34 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.823 20:18:34 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:26.823 20:18:34 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:26.823 20:18:34 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.823 20:18:34 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.823 20:18:34 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:26.823 20:18:34 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:26.823 20:18:34 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:26.823 20:18:34 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:26.823 20:18:34 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:26.823 20:18:34 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:26.823 20:18:34 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:26.823 20:18:34 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:26.823 20:18:34 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:26.823 20:18:34 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:26.823 20:18:34 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:26.824 20:18:34 -- 
ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:26.824 20:18:34 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:26.824 20:18:34 -- ftl/common.sh@81 -- # local base_bdev= 00:25:26.824 20:18:34 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:26.824 20:18:34 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:26.824 20:18:34 -- ftl/common.sh@89 -- # spdk_tgt_pid=78025 00:25:26.824 20:18:34 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:26.824 20:18:34 -- ftl/common.sh@91 -- # waitforlisten 78025 00:25:26.824 20:18:34 -- common/autotest_common.sh@829 -- # '[' -z 78025 ']' 00:25:26.824 20:18:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.824 20:18:34 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:26.824 20:18:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.824 20:18:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.824 20:18:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.824 20:18:34 -- common/autotest_common.sh@10 -- # set +x 00:25:26.824 [2024-12-16 20:18:34.455045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
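
tcp_target_setup above launches a dedicated spdk_tgt pinned to core 0 and blocks until its RPC socket answers before any bdev commands are issued. A rough sketch of that launch-and-wait pattern, assuming the default socket path and an illustrative retry budget (the real helper is waitforlisten from autotest_common.sh):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" --cpumask='[0]' &        # target reactor pinned to core 0
    spdk_tgt_pid=$!

    # Poll the default RPC socket until the target responds (retry count is illustrative).
    for _ in $(seq 1 100); do
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
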
00:25:26.824 [2024-12-16 20:18:34.455698] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78025 ] 00:25:27.085 [2024-12-16 20:18:34.608456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.345 [2024-12-16 20:18:34.835768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:27.345 [2024-12-16 20:18:34.835980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.735 20:18:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.735 20:18:35 -- common/autotest_common.sh@862 -- # return 0 00:25:28.735 20:18:35 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:28.735 20:18:35 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:28.735 20:18:35 -- ftl/common.sh@99 -- # local params 00:25:28.735 20:18:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:28.735 20:18:35 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:28.735 20:18:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:28.735 20:18:35 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:25:28.735 20:18:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:28.735 20:18:35 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:28.735 20:18:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:28.735 20:18:35 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:25:28.735 20:18:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:28.735 20:18:35 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:28.735 20:18:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:28.735 20:18:35 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:28.735 20:18:35 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:25:28.735 20:18:35 -- ftl/common.sh@54 -- # local name=base 00:25:28.735 20:18:35 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:25:28.735 20:18:35 -- ftl/common.sh@56 -- # local size=20480 00:25:28.735 20:18:35 -- ftl/common.sh@59 -- # local base_bdev 00:25:28.735 20:18:35 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:25:28.735 20:18:36 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:28.735 20:18:36 -- ftl/common.sh@62 -- # local base_size 00:25:28.735 20:18:36 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:28.735 20:18:36 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:25:28.735 20:18:36 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:28.735 20:18:36 -- common/autotest_common.sh@1369 -- # local bs 00:25:28.735 20:18:36 -- common/autotest_common.sh@1370 -- # local nb 00:25:28.735 20:18:36 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:28.997 20:18:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:28.997 { 00:25:28.997 "name": "basen1", 00:25:28.997 "aliases": [ 00:25:28.997 "51258f41-31f9-4bf5-8925-c2567a1dfb7b" 00:25:28.997 ], 00:25:28.997 "product_name": "NVMe disk", 00:25:28.997 "block_size": 4096, 00:25:28.997 "num_blocks": 1310720, 00:25:28.997 "uuid": "51258f41-31f9-4bf5-8925-c2567a1dfb7b", 00:25:28.997 "assigned_rate_limits": { 00:25:28.997 "rw_ios_per_sec": 0, 00:25:28.997 
"rw_mbytes_per_sec": 0, 00:25:28.997 "r_mbytes_per_sec": 0, 00:25:28.997 "w_mbytes_per_sec": 0 00:25:28.997 }, 00:25:28.997 "claimed": true, 00:25:28.997 "claim_type": "read_many_write_one", 00:25:28.997 "zoned": false, 00:25:28.997 "supported_io_types": { 00:25:28.997 "read": true, 00:25:28.997 "write": true, 00:25:28.997 "unmap": true, 00:25:28.997 "write_zeroes": true, 00:25:28.997 "flush": true, 00:25:28.997 "reset": true, 00:25:28.997 "compare": true, 00:25:28.997 "compare_and_write": false, 00:25:28.997 "abort": true, 00:25:28.997 "nvme_admin": true, 00:25:28.997 "nvme_io": true 00:25:28.997 }, 00:25:28.997 "driver_specific": { 00:25:28.997 "nvme": [ 00:25:28.997 { 00:25:28.997 "pci_address": "0000:00:07.0", 00:25:28.997 "trid": { 00:25:28.997 "trtype": "PCIe", 00:25:28.997 "traddr": "0000:00:07.0" 00:25:28.997 }, 00:25:28.997 "ctrlr_data": { 00:25:28.997 "cntlid": 0, 00:25:28.997 "vendor_id": "0x1b36", 00:25:28.997 "model_number": "QEMU NVMe Ctrl", 00:25:28.997 "serial_number": "12341", 00:25:28.997 "firmware_revision": "8.0.0", 00:25:28.997 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:28.997 "oacs": { 00:25:28.997 "security": 0, 00:25:28.997 "format": 1, 00:25:28.997 "firmware": 0, 00:25:28.997 "ns_manage": 1 00:25:28.997 }, 00:25:28.997 "multi_ctrlr": false, 00:25:28.997 "ana_reporting": false 00:25:28.997 }, 00:25:28.997 "vs": { 00:25:28.997 "nvme_version": "1.4" 00:25:28.997 }, 00:25:28.997 "ns_data": { 00:25:28.997 "id": 1, 00:25:28.997 "can_share": false 00:25:28.997 } 00:25:28.997 } 00:25:28.997 ], 00:25:28.997 "mp_policy": "active_passive" 00:25:28.997 } 00:25:28.997 } 00:25:28.997 ]' 00:25:28.997 20:18:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:28.997 20:18:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:28.997 20:18:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:28.997 20:18:36 -- common/autotest_common.sh@1373 -- # nb=1310720 00:25:28.997 20:18:36 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:25:28.997 20:18:36 -- common/autotest_common.sh@1377 -- # echo 5120 00:25:28.997 20:18:36 -- ftl/common.sh@63 -- # base_size=5120 00:25:28.997 20:18:36 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:28.997 20:18:36 -- ftl/common.sh@67 -- # clear_lvols 00:25:28.997 20:18:36 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:28.997 20:18:36 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:29.258 20:18:36 -- ftl/common.sh@28 -- # stores=1e3d1e40-9d28-4548-a7c7-f066ad5b8d26 00:25:29.258 20:18:36 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:29.258 20:18:36 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e3d1e40-9d28-4548-a7c7-f066ad5b8d26 00:25:29.519 20:18:36 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:29.781 20:18:37 -- ftl/common.sh@68 -- # lvs=8fee2bce-16b6-4cf1-835c-1fb8a97a5fb9 00:25:29.781 20:18:37 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8fee2bce-16b6-4cf1-835c-1fb8a97a5fb9 00:25:29.781 20:18:37 -- ftl/common.sh@107 -- # base_bdev=4243dd47-fa5e-4670-b09d-c6353ec7584a 00:25:29.781 20:18:37 -- ftl/common.sh@108 -- # [[ -z 4243dd47-fa5e-4670-b09d-c6353ec7584a ]] 00:25:29.781 20:18:37 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 4243dd47-fa5e-4670-b09d-c6353ec7584a 5120 00:25:29.781 20:18:37 -- ftl/common.sh@35 -- # local name=cache 00:25:29.781 20:18:37 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:25:29.781 20:18:37 -- ftl/common.sh@37 -- # local base_bdev=4243dd47-fa5e-4670-b09d-c6353ec7584a 00:25:29.781 20:18:37 -- ftl/common.sh@38 -- # local cache_size=5120 00:25:29.781 20:18:37 -- ftl/common.sh@41 -- # get_bdev_size 4243dd47-fa5e-4670-b09d-c6353ec7584a 00:25:29.781 20:18:37 -- common/autotest_common.sh@1367 -- # local bdev_name=4243dd47-fa5e-4670-b09d-c6353ec7584a 00:25:29.782 20:18:37 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:29.782 20:18:37 -- common/autotest_common.sh@1369 -- # local bs 00:25:29.782 20:18:37 -- common/autotest_common.sh@1370 -- # local nb 00:25:29.782 20:18:37 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4243dd47-fa5e-4670-b09d-c6353ec7584a 00:25:30.042 20:18:37 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:30.042 { 00:25:30.042 "name": "4243dd47-fa5e-4670-b09d-c6353ec7584a", 00:25:30.042 "aliases": [ 00:25:30.042 "lvs/basen1p0" 00:25:30.042 ], 00:25:30.042 "product_name": "Logical Volume", 00:25:30.042 "block_size": 4096, 00:25:30.042 "num_blocks": 5242880, 00:25:30.042 "uuid": "4243dd47-fa5e-4670-b09d-c6353ec7584a", 00:25:30.042 "assigned_rate_limits": { 00:25:30.042 "rw_ios_per_sec": 0, 00:25:30.042 "rw_mbytes_per_sec": 0, 00:25:30.042 "r_mbytes_per_sec": 0, 00:25:30.043 "w_mbytes_per_sec": 0 00:25:30.043 }, 00:25:30.043 "claimed": false, 00:25:30.043 "zoned": false, 00:25:30.043 "supported_io_types": { 00:25:30.043 "read": true, 00:25:30.043 "write": true, 00:25:30.043 "unmap": true, 00:25:30.043 "write_zeroes": true, 00:25:30.043 "flush": false, 00:25:30.043 "reset": true, 00:25:30.043 "compare": false, 00:25:30.043 "compare_and_write": false, 00:25:30.043 "abort": false, 00:25:30.043 "nvme_admin": false, 00:25:30.043 "nvme_io": false 00:25:30.043 }, 00:25:30.043 "driver_specific": { 00:25:30.043 "lvol": { 00:25:30.043 "lvol_store_uuid": "8fee2bce-16b6-4cf1-835c-1fb8a97a5fb9", 00:25:30.043 "base_bdev": "basen1", 00:25:30.043 "thin_provision": true, 00:25:30.043 "snapshot": false, 00:25:30.043 "clone": false, 00:25:30.043 "esnap_clone": false 00:25:30.043 } 00:25:30.043 } 00:25:30.043 } 00:25:30.043 ]' 00:25:30.043 20:18:37 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:30.043 20:18:37 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:30.043 20:18:37 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:30.043 20:18:37 -- common/autotest_common.sh@1373 -- # nb=5242880 00:25:30.043 20:18:37 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:25:30.043 20:18:37 -- common/autotest_common.sh@1377 -- # echo 20480 00:25:30.043 20:18:37 -- ftl/common.sh@41 -- # local base_size=1024 00:25:30.043 20:18:37 -- ftl/common.sh@44 -- # local nvc_bdev 00:25:30.043 20:18:37 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:25:30.303 20:18:37 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:30.303 20:18:37 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:30.303 20:18:37 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:30.564 20:18:38 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:30.564 20:18:38 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:30.564 20:18:38 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4243dd47-fa5e-4670-b09d-c6353ec7584a -c cachen1p0 --l2p_dram_limit 2 00:25:30.826 
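
get_bdev_size above derives a bdev's size in MiB from the JSON returned by bdev_get_bdevs, multiplying num_blocks by block_size; that arithmetic is what turns basen1's 1310720 blocks of 4096 bytes into 5120 MiB. A small sketch of the same computation (the bdev name is the one used above, the rest is illustrative):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    bdev_json=$("$rpc_py" bdev_get_bdevs -b basen1)
    bs=$(jq '.[] .block_size' <<< "$bdev_json")     # 4096 in the run above
    nb=$(jq '.[] .num_blocks' <<< "$bdev_json")     # 1310720 in the run above
    echo $(( nb * bs / 1024 / 1024 ))               # bdev size in MiB (5120 here)
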
[2024-12-16 20:18:38.246006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.246046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:30.826 [2024-12-16 20:18:38.246061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:30.826 [2024-12-16 20:18:38.246071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.246126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.246135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:30.826 [2024-12-16 20:18:38.246145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:25:30.826 [2024-12-16 20:18:38.246152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.246172] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:30.826 [2024-12-16 20:18:38.246923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:30.826 [2024-12-16 20:18:38.246943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.246951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:30.826 [2024-12-16 20:18:38.246962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.773 ms 00:25:30.826 [2024-12-16 20:18:38.246970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.247002] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID dbd316a2-e20b-4127-a678-9a1761b2de7f 00:25:30.826 [2024-12-16 20:18:38.248083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.248110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:30.826 [2024-12-16 20:18:38.248120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:25:30.826 [2024-12-16 20:18:38.248129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.253449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.253476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:30.826 [2024-12-16 20:18:38.253485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.275 ms 00:25:30.826 [2024-12-16 20:18:38.253493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.253567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.253578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:30.826 [2024-12-16 20:18:38.253587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:30.826 [2024-12-16 20:18:38.253597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.253637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.253650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:30.826 [2024-12-16 20:18:38.253658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:30.826 [2024-12-16 20:18:38.253667] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.253691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:30.826 [2024-12-16 20:18:38.257394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.257417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:30.826 [2024-12-16 20:18:38.257428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.709 ms 00:25:30.826 [2024-12-16 20:18:38.257435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.826 [2024-12-16 20:18:38.257463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.826 [2024-12-16 20:18:38.257471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:30.827 [2024-12-16 20:18:38.257480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:30.827 [2024-12-16 20:18:38.257487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.827 [2024-12-16 20:18:38.257510] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:30.827 [2024-12-16 20:18:38.257620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:30.827 [2024-12-16 20:18:38.257635] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:30.827 [2024-12-16 20:18:38.257645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:30.827 [2024-12-16 20:18:38.257656] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:30.827 [2024-12-16 20:18:38.257665] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:30.827 [2024-12-16 20:18:38.257676] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:30.827 [2024-12-16 20:18:38.257684] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:30.827 [2024-12-16 20:18:38.257694] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:30.827 [2024-12-16 20:18:38.257700] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:30.827 [2024-12-16 20:18:38.257709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.827 [2024-12-16 20:18:38.257723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:30.827 [2024-12-16 20:18:38.257731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:25:30.827 [2024-12-16 20:18:38.257738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.827 [2024-12-16 20:18:38.257801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.827 [2024-12-16 20:18:38.257814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:30.827 [2024-12-16 20:18:38.257823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:25:30.827 [2024-12-16 20:18:38.257832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.827 [2024-12-16 20:18:38.257915] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:30.827 [2024-12-16 20:18:38.257924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:30.827 [2024-12-16 
20:18:38.257933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:30.827 [2024-12-16 20:18:38.257940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.257949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:30.827 [2024-12-16 20:18:38.257956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.257965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:30.827 [2024-12-16 20:18:38.257971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:30.827 [2024-12-16 20:18:38.257979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:30.827 [2024-12-16 20:18:38.257986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.257994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:30.827 [2024-12-16 20:18:38.258000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:30.827 [2024-12-16 20:18:38.258019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:30.827 [2024-12-16 20:18:38.258034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:30.827 [2024-12-16 20:18:38.258057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:30.827 [2024-12-16 20:18:38.258065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:30.827 [2024-12-16 20:18:38.258079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:30.827 [2024-12-16 20:18:38.258086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:30.827 [2024-12-16 20:18:38.258101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:30.827 [2024-12-16 20:18:38.258109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:30.827 [2024-12-16 20:18:38.258123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:30.827 [2024-12-16 20:18:38.258129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:30.827 [2024-12-16 20:18:38.258144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:30.827 [2024-12-16 20:18:38.258152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:30.827 [2024-12-16 20:18:38.258168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:30.827 [2024-12-16 20:18:38.258174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:30.827 [2024-12-16 
20:18:38.258183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:30.827 [2024-12-16 20:18:38.258189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:30.827 [2024-12-16 20:18:38.258197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:30.827 [2024-12-16 20:18:38.258213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258226] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:30.827 [2024-12-16 20:18:38.258233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:30.827 [2024-12-16 20:18:38.258242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:30.827 [2024-12-16 20:18:38.258261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:30.827 [2024-12-16 20:18:38.258268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:30.827 [2024-12-16 20:18:38.258275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:30.827 [2024-12-16 20:18:38.258282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:30.827 [2024-12-16 20:18:38.258291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:30.827 [2024-12-16 20:18:38.258311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:30.827 [2024-12-16 20:18:38.258321] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:30.827 [2024-12-16 20:18:38.258331] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:30.827 [2024-12-16 20:18:38.258349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:30.827 [2024-12-16 20:18:38.258373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:30.827 [2024-12-16 20:18:38.258380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:30.827 [2024-12-16 20:18:38.258388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:30.827 [2024-12-16 20:18:38.258395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258403] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:30.827 [2024-12-16 20:18:38.258439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:30.827 [2024-12-16 20:18:38.258446] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:30.827 [2024-12-16 20:18:38.258455] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258463] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.827 [2024-12-16 20:18:38.258471] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:30.827 [2024-12-16 20:18:38.258478] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:30.827 [2024-12-16 20:18:38.258486] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:30.827 [2024-12-16 20:18:38.258494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.827 [2024-12-16 20:18:38.258502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:30.827 [2024-12-16 20:18:38.258509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:25:30.827 [2024-12-16 20:18:38.258518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.827 [2024-12-16 20:18:38.273211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.827 [2024-12-16 20:18:38.273245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:30.827 [2024-12-16 20:18:38.273254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.655 ms 00:25:30.827 [2024-12-16 20:18:38.273263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.827 [2024-12-16 20:18:38.273312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.827 [2024-12-16 20:18:38.273325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:30.827 [2024-12-16 20:18:38.273334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:30.827 [2024-12-16 20:18:38.273343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.303945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.303976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:30.828 [2024-12-16 20:18:38.303986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.561 ms 00:25:30.828 [2024-12-16 
20:18:38.303995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.304023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.304032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:30.828 [2024-12-16 20:18:38.304040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:30.828 [2024-12-16 20:18:38.304048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.304420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.304443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:30.828 [2024-12-16 20:18:38.304453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:25:30.828 [2024-12-16 20:18:38.304461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.304505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.304525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:30.828 [2024-12-16 20:18:38.304533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:30.828 [2024-12-16 20:18:38.304542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.319698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.319727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:30.828 [2024-12-16 20:18:38.319737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.140 ms 00:25:30.828 [2024-12-16 20:18:38.319747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.331386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:30.828 [2024-12-16 20:18:38.332263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.332287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:30.828 [2024-12-16 20:18:38.332314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.439 ms 00:25:30.828 [2024-12-16 20:18:38.332322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.359138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:30.828 [2024-12-16 20:18:38.359172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:30.828 [2024-12-16 20:18:38.359184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.790 ms 00:25:30.828 [2024-12-16 20:18:38.359192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:30.828 [2024-12-16 20:18:38.359233] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
00:25:30.828 [2024-12-16 20:18:38.359244] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:25:35.039 [2024-12-16 20:18:42.094727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.094773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:35.039 [2024-12-16 20:18:42.094787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3735.479 ms 00:25:35.039 [2024-12-16 20:18:42.094794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.094873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.094882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:35.039 [2024-12-16 20:18:42.094892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:25:35.039 [2024-12-16 20:18:42.094898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.112401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.112428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:35.039 [2024-12-16 20:18:42.112438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.467 ms 00:25:35.039 [2024-12-16 20:18:42.112445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.129619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.129640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:35.039 [2024-12-16 20:18:42.129651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.142 ms 00:25:35.039 [2024-12-16 20:18:42.129656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.129897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.129904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:35.039 [2024-12-16 20:18:42.129912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:25:35.039 [2024-12-16 20:18:42.129918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.182265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.182311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:35.039 [2024-12-16 20:18:42.182327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 52.315 ms 00:25:35.039 [2024-12-16 20:18:42.182336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.206932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.206965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:35.039 [2024-12-16 20:18:42.206978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.548 ms 00:25:35.039 [2024-12-16 20:18:42.206986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.208167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.208195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:35.039 [2024-12-16 20:18:42.208208] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.139 ms 00:25:35.039 [2024-12-16 20:18:42.208215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.232290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.232326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:35.039 [2024-12-16 20:18:42.232339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.040 ms 00:25:35.039 [2024-12-16 20:18:42.232345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.232384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.232393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:35.039 [2024-12-16 20:18:42.232403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:35.039 [2024-12-16 20:18:42.232410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.232489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.039 [2024-12-16 20:18:42.232498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:35.039 [2024-12-16 20:18:42.232508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:25:35.039 [2024-12-16 20:18:42.232515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.039 [2024-12-16 20:18:42.233362] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3986.930 ms, result 0 00:25:35.039 { 00:25:35.039 "name": "ftl", 00:25:35.039 "uuid": "dbd316a2-e20b-4127-a678-9a1761b2de7f" 00:25:35.039 } 00:25:35.039 20:18:42 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:35.039 [2024-12-16 20:18:42.432787] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.039 20:18:42 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:35.039 20:18:42 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:35.300 [2024-12-16 20:18:42.793181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:35.300 20:18:42 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:35.562 [2024-12-16 20:18:42.973460] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:35.562 20:18:42 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:35.823 Fill FTL, iteration 1 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:35.823 20:18:43 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:35.823 20:18:43 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:35.823 20:18:43 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:35.823 20:18:43 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:35.823 20:18:43 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:35.823 20:18:43 -- ftl/common.sh@163 -- # spdk_ini_pid=78157 00:25:35.823 20:18:43 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:35.823 20:18:43 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:35.823 20:18:43 -- ftl/common.sh@165 -- # waitforlisten 78157 /var/tmp/spdk.tgt.sock 00:25:35.823 20:18:43 -- common/autotest_common.sh@829 -- # '[' -z 78157 ']' 00:25:35.823 20:18:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:35.823 20:18:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:35.823 20:18:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:35.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:35.824 20:18:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:35.824 20:18:43 -- common/autotest_common.sh@10 -- # set +x 00:25:35.824 [2024-12-16 20:18:43.363080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
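
Between the FTL startup above and the fill below, the bdev named ftl is exported over NVMe/TCP (transport, subsystem, namespace, listener) and a second spdk_tgt running on core 1 attaches to it, after which the namespace appears on the initiator side as ftln1. A condensed sketch of that export/attach chain, using the RPCs and addresses shown in the log and omitting error handling:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default RPC socket): export bdev "ftl" as namespace 1 of cnode0.
    "$rpc_py" nvmf_create_transport --trtype TCP
    "$rpc_py" nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    "$rpc_py" nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -a 127.0.0.1 -s 4420

    # Initiator side (separate spdk_tgt, RPC socket /var/tmp/spdk.tgt.sock):
    # attaching over TCP makes the namespace show up as bdev "ftln1".
    "$rpc_py" -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
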
00:25:35.824 [2024-12-16 20:18:43.363213] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78157 ] 00:25:36.085 [2024-12-16 20:18:43.513685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.346 [2024-12-16 20:18:43.732785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.346 [2024-12-16 20:18:43.733296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.289 20:18:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.289 20:18:44 -- common/autotest_common.sh@862 -- # return 0 00:25:37.289 20:18:44 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:37.550 ftln1 00:25:37.550 20:18:45 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:37.550 20:18:45 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:37.810 20:18:45 -- ftl/common.sh@173 -- # echo ']}' 00:25:37.810 20:18:45 -- ftl/common.sh@176 -- # killprocess 78157 00:25:37.810 20:18:45 -- common/autotest_common.sh@936 -- # '[' -z 78157 ']' 00:25:37.810 20:18:45 -- common/autotest_common.sh@940 -- # kill -0 78157 00:25:37.810 20:18:45 -- common/autotest_common.sh@941 -- # uname 00:25:37.810 20:18:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:37.810 20:18:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78157 00:25:37.810 killing process with pid 78157 00:25:37.810 20:18:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:37.810 20:18:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:37.810 20:18:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78157' 00:25:37.810 20:18:45 -- common/autotest_common.sh@955 -- # kill 78157 00:25:37.810 20:18:45 -- common/autotest_common.sh@960 -- # wait 78157 00:25:39.727 20:18:47 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:39.727 20:18:47 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:39.727 [2024-12-16 20:18:47.199631] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
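
The tcp_dd helper above does not leave the initiator target running: it wraps the initiator's bdev subsystem configuration in a {"subsystems": [...]} envelope, saves it as ini.json, shuts the helper spdk_tgt down, and then hands the file to spdk_dd via --json so spdk_dd re-creates the NVMe/TCP attachment itself. A sketch of that config-capture step (output path as used in the log, surrounding logic paraphrased):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

    {
        echo '{"subsystems": ['
        "$rpc_py" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > "$ini_json"

    # spdk_dd can now replay the config and see ftln1 without a running helper target, e.g.:
    # spdk_dd --json="$ini_json" --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
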
00:25:39.727 [2024-12-16 20:18:47.199738] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78212 ] 00:25:39.727 [2024-12-16 20:18:47.347215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.988 [2024-12-16 20:18:47.538574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.373  [2024-12-16T20:18:49.947Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-16T20:18:51.323Z] Copying: 440/1024 [MB] (239 MBps) [2024-12-16T20:18:52.258Z] Copying: 678/1024 [MB] (238 MBps) [2024-12-16T20:18:52.517Z] Copying: 912/1024 [MB] (234 MBps) [2024-12-16T20:18:53.085Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:25:45.445 00:25:45.704 Calculate MD5 checksum, iteration 1 00:25:45.704 20:18:53 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:45.704 20:18:53 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:45.704 20:18:53 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:45.704 20:18:53 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:45.704 20:18:53 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:45.704 20:18:53 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:45.704 20:18:53 -- ftl/common.sh@154 -- # return 0 00:25:45.704 20:18:53 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:45.704 [2024-12-16 20:18:53.144126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:45.704 [2024-12-16 20:18:53.144233] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78280 ] 00:25:45.704 [2024-12-16 20:18:53.290344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.963 [2024-12-16 20:18:53.451578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.339  [2024-12-16T20:18:55.558Z] Copying: 673/1024 [MB] (673 MBps) [2024-12-16T20:18:56.171Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:25:48.531 00:25:48.531 20:18:55 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:48.531 20:18:55 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:51.074 20:18:58 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:51.074 20:18:58 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4986e0df62bef4c676ee4ee89ba399fc 00:25:51.074 20:18:58 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:51.074 20:18:58 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:51.074 20:18:58 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:51.074 Fill FTL, iteration 2 00:25:51.074 20:18:58 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:51.074 20:18:58 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:51.074 20:18:58 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:51.074 20:18:58 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:51.074 20:18:58 -- ftl/common.sh@154 -- # return 0 00:25:51.074 20:18:58 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:51.074 [2024-12-16 20:18:58.174802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
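
The driver loop above runs two fill/checksum iterations: each fill writes 1024 one-MiB blocks of /dev/urandom into ftln1 at the current seek offset, each checksum pass reads the same window back at the matching skip offset into a scratch file, and both offsets then advance by count. A compact sketch of that bookkeeping, using the bs/count/qd values shown in the log (tcp_dd and the file path follow the log; the loop itself is paraphrased, not the script verbatim):

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    declare -a sums

    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs="$bs" --count="$count" --qd="$qd" --seek="$seek"
        seek=$(( seek + count ))

        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs="$bs" --count="$count" --qd="$qd" --skip="$skip"
        skip=$(( skip + count ))
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
    done
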
00:25:51.074 [2024-12-16 20:18:58.175032] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78340 ] 00:25:51.074 [2024-12-16 20:18:58.320478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.074 [2024-12-16 20:18:58.484851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.450  [2024-12-16T20:19:01.026Z] Copying: 248/1024 [MB] (248 MBps) [2024-12-16T20:19:01.961Z] Copying: 470/1024 [MB] (222 MBps) [2024-12-16T20:19:02.896Z] Copying: 700/1024 [MB] (230 MBps) [2024-12-16T20:19:03.463Z] Copying: 925/1024 [MB] (225 MBps) [2024-12-16T20:19:04.031Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:25:56.391 00:25:56.391 Calculate MD5 checksum, iteration 2 00:25:56.391 20:19:03 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:56.392 20:19:03 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:56.392 20:19:03 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:56.392 20:19:03 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:56.392 20:19:03 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:56.392 20:19:03 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:56.392 20:19:03 -- ftl/common.sh@154 -- # return 0 00:25:56.392 20:19:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:56.392 [2024-12-16 20:19:03.947187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:56.392 [2024-12-16 20:19:03.947271] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78404 ] 00:25:56.651 [2024-12-16 20:19:04.089722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.651 [2024-12-16 20:19:04.255074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.554  [2024-12-16T20:19:06.452Z] Copying: 656/1024 [MB] (656 MBps) [2024-12-16T20:19:07.387Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:25:59.747 00:25:59.747 20:19:07 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:59.747 20:19:07 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:02.287 20:19:09 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:02.287 20:19:09 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8db0cf07393cc82fe4430dc8f4f16bb8 00:26:02.287 20:19:09 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:02.287 20:19:09 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:02.287 20:19:09 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:02.287 [2024-12-16 20:19:09.572037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.287 [2024-12-16 20:19:09.572186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:02.287 [2024-12-16 20:19:09.572204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:02.287 [2024-12-16 20:19:09.572214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.287 [2024-12-16 20:19:09.572239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.287 [2024-12-16 20:19:09.572246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:02.287 [2024-12-16 20:19:09.572252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:02.287 [2024-12-16 20:19:09.572258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.287 [2024-12-16 20:19:09.572273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.287 [2024-12-16 20:19:09.572279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:02.287 [2024-12-16 20:19:09.572290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:02.287 [2024-12-16 20:19:09.572296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.287 [2024-12-16 20:19:09.572361] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.314 ms, result 0 00:26:02.287 true 00:26:02.287 20:19:09 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:02.287 { 00:26:02.287 "name": "ftl", 00:26:02.287 "properties": [ 00:26:02.287 { 00:26:02.287 "name": "superblock_version", 00:26:02.287 "value": 5, 00:26:02.287 "read-only": true 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "name": "base_device", 00:26:02.287 "bands": [ 00:26:02.287 { 00:26:02.287 "id": 0, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 1, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 2, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 
00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 3, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 4, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 5, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 6, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 7, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 8, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 9, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 10, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 11, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 12, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 13, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 14, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 15, 00:26:02.287 "state": "FREE", 00:26:02.287 "validity": 0.0 00:26:02.287 }, 00:26:02.287 { 00:26:02.287 "id": 16, 00:26:02.287 "state": "FREE", 00:26:02.288 "validity": 0.0 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "id": 17, 00:26:02.288 "state": "FREE", 00:26:02.288 "validity": 0.0 00:26:02.288 } 00:26:02.288 ], 00:26:02.288 "read-only": true 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "name": "cache_device", 00:26:02.288 "type": "bdev", 00:26:02.288 "chunks": [ 00:26:02.288 { 00:26:02.288 "id": 0, 00:26:02.288 "state": "CLOSED", 00:26:02.288 "utilization": 1.0 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "id": 1, 00:26:02.288 "state": "CLOSED", 00:26:02.288 "utilization": 1.0 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "id": 2, 00:26:02.288 "state": "OPEN", 00:26:02.288 "utilization": 0.001953125 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "id": 3, 00:26:02.288 "state": "OPEN", 00:26:02.288 "utilization": 0.0 00:26:02.288 } 00:26:02.288 ], 00:26:02.288 "read-only": true 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "name": "verbose_mode", 00:26:02.288 "value": true, 00:26:02.288 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:02.288 }, 00:26:02.288 { 00:26:02.288 "name": "prep_upgrade_on_shutdown", 00:26:02.288 "value": false, 00:26:02.288 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:02.288 } 00:26:02.288 ] 00:26:02.288 } 00:26:02.288 20:19:09 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:02.547 [2024-12-16 20:19:09.940336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.547 [2024-12-16 20:19:09.940370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:02.548 [2024-12-16 20:19:09.940378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:02.548 [2024-12-16 20:19:09.940384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.548 [2024-12-16 20:19:09.940400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:02.548 [2024-12-16 20:19:09.940406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:02.548 [2024-12-16 20:19:09.940413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:02.548 [2024-12-16 20:19:09.940418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.548 [2024-12-16 20:19:09.940433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.548 [2024-12-16 20:19:09.940439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:02.548 [2024-12-16 20:19:09.940444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:02.548 [2024-12-16 20:19:09.940450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.548 [2024-12-16 20:19:09.940491] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:26:02.548 true 00:26:02.548 20:19:09 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:02.548 20:19:09 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:02.548 20:19:09 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:02.548 20:19:10 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:02.548 20:19:10 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:02.548 20:19:10 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:02.808 [2024-12-16 20:19:10.324661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.808 [2024-12-16 20:19:10.324695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:02.808 [2024-12-16 20:19:10.324703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:02.808 [2024-12-16 20:19:10.324709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.808 [2024-12-16 20:19:10.324725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.808 [2024-12-16 20:19:10.324732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:02.808 [2024-12-16 20:19:10.324737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:02.808 [2024-12-16 20:19:10.324743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.808 [2024-12-16 20:19:10.324758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.808 [2024-12-16 20:19:10.324763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:02.808 [2024-12-16 20:19:10.324769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:02.808 [2024-12-16 20:19:10.324774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.808 [2024-12-16 20:19:10.324816] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.145 ms, result 0 00:26:02.808 true 00:26:02.808 20:19:10 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:03.069 { 00:26:03.069 "name": "ftl", 00:26:03.069 "properties": [ 00:26:03.069 { 00:26:03.069 "name": "superblock_version", 00:26:03.069 "value": 5, 00:26:03.069 "read-only": true 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 
"name": "base_device", 00:26:03.069 "bands": [ 00:26:03.069 { 00:26:03.069 "id": 0, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 1, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 2, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 3, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 4, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 5, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 6, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.069 "id": 7, 00:26:03.069 "state": "FREE", 00:26:03.069 "validity": 0.0 00:26:03.069 }, 00:26:03.069 { 00:26:03.070 "id": 8, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 9, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 10, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 11, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 12, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 13, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 14, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 15, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 16, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 17, 00:26:03.070 "state": "FREE", 00:26:03.070 "validity": 0.0 00:26:03.070 } 00:26:03.070 ], 00:26:03.070 "read-only": true 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "name": "cache_device", 00:26:03.070 "type": "bdev", 00:26:03.070 "chunks": [ 00:26:03.070 { 00:26:03.070 "id": 0, 00:26:03.070 "state": "CLOSED", 00:26:03.070 "utilization": 1.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 1, 00:26:03.070 "state": "CLOSED", 00:26:03.070 "utilization": 1.0 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 2, 00:26:03.070 "state": "OPEN", 00:26:03.070 "utilization": 0.001953125 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "id": 3, 00:26:03.070 "state": "OPEN", 00:26:03.070 "utilization": 0.0 00:26:03.070 } 00:26:03.070 ], 00:26:03.070 "read-only": true 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "name": "verbose_mode", 00:26:03.070 "value": true, 00:26:03.070 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:03.070 }, 00:26:03.070 { 00:26:03.070 "name": "prep_upgrade_on_shutdown", 00:26:03.070 "value": true, 00:26:03.070 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:03.070 } 00:26:03.070 ] 00:26:03.070 } 00:26:03.070 20:19:10 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:03.070 20:19:10 -- ftl/common.sh@130 -- # [[ -n 78025 ]] 00:26:03.070 20:19:10 -- ftl/common.sh@131 -- # killprocess 78025 00:26:03.070 20:19:10 -- common/autotest_common.sh@936 -- # '[' -z 78025 ']' 00:26:03.070 20:19:10 -- 
common/autotest_common.sh@940 -- # kill -0 78025 00:26:03.070 20:19:10 -- common/autotest_common.sh@941 -- # uname 00:26:03.070 20:19:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:03.070 20:19:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78025 00:26:03.070 20:19:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:03.070 20:19:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:03.070 20:19:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78025' 00:26:03.070 killing process with pid 78025 00:26:03.070 20:19:10 -- common/autotest_common.sh@955 -- # kill 78025 00:26:03.070 20:19:10 -- common/autotest_common.sh@960 -- # wait 78025 00:26:03.641 [2024-12-16 20:19:11.076077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:03.641 [2024-12-16 20:19:11.086589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:03.641 [2024-12-16 20:19:11.086711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:03.641 [2024-12-16 20:19:11.086767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:03.641 [2024-12-16 20:19:11.086786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:03.641 [2024-12-16 20:19:11.086816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:03.641 [2024-12-16 20:19:11.088914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:03.641 [2024-12-16 20:19:11.089000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:03.641 [2024-12-16 20:19:11.089047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.069 ms 00:26:03.641 [2024-12-16 20:19:11.089064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.583365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.583544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:13.647 [2024-12-16 20:19:19.583596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8494.238 ms 00:26:13.647 [2024-12-16 20:19:19.583615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.584666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.584744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:13.647 [2024-12-16 20:19:19.584788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.024 ms 00:26:13.647 [2024-12-16 20:19:19.584805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.585669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.585735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:13.647 [2024-12-16 20:19:19.585776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:26:13.647 [2024-12-16 20:19:19.585793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.593537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.593625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:13.647 [2024-12-16 20:19:19.593670] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.700 ms 00:26:13.647 [2024-12-16 20:19:19.593686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.598812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.598904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:13.647 [2024-12-16 20:19:19.598916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.097 ms 00:26:13.647 [2024-12-16 20:19:19.598922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.598985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.598993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:13.647 [2024-12-16 20:19:19.599000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:13.647 [2024-12-16 20:19:19.599009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.606195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.606220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:13.647 [2024-12-16 20:19:19.606228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.175 ms 00:26:13.647 [2024-12-16 20:19:19.606234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.613602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.613624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:13.647 [2024-12-16 20:19:19.613631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.344 ms 00:26:13.647 [2024-12-16 20:19:19.613636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.620712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.620796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:13.647 [2024-12-16 20:19:19.620806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.054 ms 00:26:13.647 [2024-12-16 20:19:19.620811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.627751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.647 [2024-12-16 20:19:19.627833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:13.647 [2024-12-16 20:19:19.627844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.891 ms 00:26:13.647 [2024-12-16 20:19:19.627849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.647 [2024-12-16 20:19:19.627869] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:13.647 [2024-12-16 20:19:19.627880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:13.647 [2024-12-16 20:19:19.627887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:13.647 [2024-12-16 20:19:19.627893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:13.647 [2024-12-16 20:19:19.627899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:13.647 [2024-12-16 20:19:19.627955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:13.648 [2024-12-16 20:19:19.627961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:13.648 [2024-12-16 20:19:19.627972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:13.648 [2024-12-16 20:19:19.627978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:13.648 [2024-12-16 20:19:19.627984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:13.648 [2024-12-16 20:19:19.627991] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:13.648 [2024-12-16 20:19:19.627997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dbd316a2-e20b-4127-a678-9a1761b2de7f 00:26:13.648 [2024-12-16 20:19:19.628003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:13.648 [2024-12-16 20:19:19.628008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:13.648 [2024-12-16 20:19:19.628014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:13.648 [2024-12-16 20:19:19.628019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:13.648 [2024-12-16 20:19:19.628025] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:13.648 [2024-12-16 20:19:19.628030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:13.648 [2024-12-16 20:19:19.628038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:13.648 [2024-12-16 20:19:19.628043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:13.648 [2024-12-16 20:19:19.628048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:13.648 [2024-12-16 20:19:19.628054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.648 [2024-12-16 20:19:19.628060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:13.648 [2024-12-16 20:19:19.628065] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:26:13.648 [2024-12-16 20:19:19.628071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.637882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.648 [2024-12-16 20:19:19.637904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:13.648 [2024-12-16 20:19:19.637913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.798 ms 00:26:13.648 [2024-12-16 20:19:19.637918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.638068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.648 [2024-12-16 20:19:19.638074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:13.648 [2024-12-16 20:19:19.638081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.133 ms 00:26:13.648 [2024-12-16 20:19:19.638086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.672946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.672971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:13.648 [2024-12-16 20:19:19.672979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.672988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.673010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.673016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:13.648 [2024-12-16 20:19:19.673022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.673027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.673074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.673081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:13.648 [2024-12-16 20:19:19.673087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.673092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.673106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.673112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:13.648 [2024-12-16 20:19:19.673117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.673122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.731149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.731281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:13.648 [2024-12-16 20:19:19.731294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.731313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:13.648 
[2024-12-16 20:19:19.753588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:13.648 [2024-12-16 20:19:19.753647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:13.648 [2024-12-16 20:19:19.753700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:13.648 [2024-12-16 20:19:19.753789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:13.648 [2024-12-16 20:19:19.753833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:13.648 [2024-12-16 20:19:19.753877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.753915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:13.648 [2024-12-16 20:19:19.753923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:13.648 [2024-12-16 20:19:19.753929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:13.648 [2024-12-16 20:19:19.753935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.648 [2024-12-16 20:19:19.754022] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8667.385 ms, result 0 00:26:16.951 20:19:24 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:16.951 20:19:24 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:16.951 20:19:24 -- ftl/common.sh@81 -- # local base_bdev= 00:26:16.951 20:19:24 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:16.951 20:19:24 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:16.951 20:19:24 -- ftl/common.sh@89 -- # spdk_tgt_pid=78613 00:26:16.951 20:19:24 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:16.951 20:19:24 -- ftl/common.sh@91 -- # waitforlisten 78613 00:26:16.951 20:19:24 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:16.951 20:19:24 -- common/autotest_common.sh@829 -- # '[' -z 78613 ']' 00:26:16.951 20:19:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.951 20:19:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.951 20:19:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.951 20:19:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.951 20:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:16.951 [2024-12-16 20:19:24.325265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:16.951 [2024-12-16 20:19:24.325559] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78613 ] 00:26:16.951 [2024-12-16 20:19:24.471211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.213 [2024-12-16 20:19:24.608887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:17.213 [2024-12-16 20:19:24.609183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.786 [2024-12-16 20:19:25.134891] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:17.786 [2024-12-16 20:19:25.135288] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:17.786 [2024-12-16 20:19:25.271184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.786 [2024-12-16 20:19:25.271387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:17.786 [2024-12-16 20:19:25.271489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:17.786 [2024-12-16 20:19:25.271529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.786 [2024-12-16 20:19:25.271606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.271705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:17.787 [2024-12-16 20:19:25.271750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:17.787 [2024-12-16 20:19:25.271783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.271830] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:17.787 [2024-12-16 20:19:25.272458] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:17.787 [2024-12-16 20:19:25.272526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.272564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:17.787 [2024-12-16 20:19:25.272603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.701 ms 00:26:17.787 [2024-12-16 20:19:25.272697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.273774] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:17.787 [2024-12-16 20:19:25.283529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.283622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:17.787 [2024-12-16 20:19:25.283722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.756 ms 00:26:17.787 [2024-12-16 20:19:25.283742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.283793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.283885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:17.787 [2024-12-16 20:19:25.283968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:17.787 [2024-12-16 20:19:25.284012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.288317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.288445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:17.787 [2024-12-16 20:19:25.288534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.241 ms 00:26:17.787 [2024-12-16 20:19:25.288642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.288718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.288779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:17.787 [2024-12-16 20:19:25.288858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:17.787 [2024-12-16 20:19:25.288898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.288957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.289031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:17.787 [2024-12-16 20:19:25.289111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:17.787 [2024-12-16 20:19:25.289156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.289207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:17.787 [2024-12-16 20:19:25.291986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.292044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:17.787 [2024-12-16 20:19:25.292088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.787 ms 00:26:17.787 [2024-12-16 20:19:25.292121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.292164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.292251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:17.787 [2024-12-16 20:19:25.292293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:17.787 [2024-12-16 20:19:25.292340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.292382] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
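
For reference, the restart captured above reduces to launching spdk_tgt from the JSON config saved before shutdown and waiting for its RPC socket before any bdev_ftl_* call is issued. A minimal sketch assuming only the binaries and paths printed in this run; the test itself goes through the tcp_target_setup and waitforlisten helpers in ftl/common.sh and common/autotest_common.sh, not this simplified loop:

  # relaunch the target with the bdev/FTL configuration saved before shutdown
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  # poll the default RPC socket until the target answers (illustrative loop, not the real helper)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
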
00:26:17.787 [2024-12-16 20:19:25.292480] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:17.787 [2024-12-16 20:19:25.292542] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:17.787 [2024-12-16 20:19:25.292657] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:17.787 [2024-12-16 20:19:25.292762] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:17.787 [2024-12-16 20:19:25.292842] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:17.787 [2024-12-16 20:19:25.292882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:17.787 [2024-12-16 20:19:25.292915] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:17.787 [2024-12-16 20:19:25.292984] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:17.787 [2024-12-16 20:19:25.293025] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:17.787 [2024-12-16 20:19:25.293058] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:17.787 [2024-12-16 20:19:25.293089] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:17.787 [2024-12-16 20:19:25.293159] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:17.787 [2024-12-16 20:19:25.293198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.293226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:17.787 [2024-12-16 20:19:25.293258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:26:17.787 [2024-12-16 20:19:25.293341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.293431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.787 [2024-12-16 20:19:25.293507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:17.787 [2024-12-16 20:19:25.293554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:17.787 [2024-12-16 20:19:25.293587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.787 [2024-12-16 20:19:25.293719] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:17.787 [2024-12-16 20:19:25.293767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:17.787 [2024-12-16 20:19:25.293796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:17.787 [2024-12-16 20:19:25.293828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:17.787 [2024-12-16 20:19:25.293865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:17.787 [2024-12-16 20:19:25.293875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:17.787 [2024-12-16 20:19:25.293881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:26:17.787 [2024-12-16 20:19:25.293886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:17.787 [2024-12-16 20:19:25.293896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:17.787 [2024-12-16 20:19:25.293901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:17.787 [2024-12-16 20:19:25.293911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:17.787 [2024-12-16 20:19:25.293926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:17.787 [2024-12-16 20:19:25.293931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.787 [2024-12-16 20:19:25.293935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:17.787 [2024-12-16 20:19:25.293940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:17.787 [2024-12-16 20:19:25.293945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:17.788 [2024-12-16 20:19:25.293950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:17.788 [2024-12-16 20:19:25.293955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:17.788 [2024-12-16 20:19:25.293960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:17.788 [2024-12-16 20:19:25.293965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:17.788 [2024-12-16 20:19:25.293970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:17.788 [2024-12-16 20:19:25.293974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:17.788 [2024-12-16 20:19:25.293979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:17.788 [2024-12-16 20:19:25.293984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:17.788 [2024-12-16 20:19:25.293989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:17.788 [2024-12-16 20:19:25.293994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:17.788 [2024-12-16 20:19:25.293998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:17.788 [2024-12-16 20:19:25.294003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:17.788 [2024-12-16 20:19:25.294007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:17.788 [2024-12-16 20:19:25.294012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:17.788 [2024-12-16 20:19:25.294017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.788 [2024-12-16 20:19:25.294023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:17.788 [2024-12-16 20:19:25.294029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:17.788 [2024-12-16 20:19:25.294033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.788 [2024-12-16 20:19:25.294038] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:26:17.788 [2024-12-16 20:19:25.294044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:17.788 [2024-12-16 20:19:25.294049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:17.788 [2024-12-16 20:19:25.294054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.788 [2024-12-16 20:19:25.294060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:17.788 [2024-12-16 20:19:25.294065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:17.788 [2024-12-16 20:19:25.294070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:17.788 [2024-12-16 20:19:25.294075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:17.788 [2024-12-16 20:19:25.294080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:17.788 [2024-12-16 20:19:25.294084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:17.788 [2024-12-16 20:19:25.294090] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:17.788 [2024-12-16 20:19:25.294097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:17.788 [2024-12-16 20:19:25.294111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:17.788 [2024-12-16 20:19:25.294127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:17.788 [2024-12-16 20:19:25.294136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:17.788 [2024-12-16 20:19:25.294142] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:17.788 [2024-12-16 20:19:25.294147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:17.788 [2024-12-16 20:19:25.294174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:17.788 [2024-12-16 20:19:25.294179] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:17.788 [2024-12-16 20:19:25.294185] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.788 [2024-12-16 20:19:25.294196] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:17.788 [2024-12-16 20:19:25.294203] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:17.788 [2024-12-16 20:19:25.294208] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:17.788 [2024-12-16 20:19:25.294214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.294220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:17.788 [2024-12-16 20:19:25.294225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:26:17.788 [2024-12-16 20:19:25.294231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.305736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.305762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:17.788 [2024-12-16 20:19:25.305770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.466 ms 00:26:17.788 [2024-12-16 20:19:25.305776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.305802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.305809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:17.788 [2024-12-16 20:19:25.305814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:17.788 [2024-12-16 20:19:25.305820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.329638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.329662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:17.788 [2024-12-16 20:19:25.329670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.782 ms 00:26:17.788 [2024-12-16 20:19:25.329677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.329696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.329703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:17.788 [2024-12-16 20:19:25.329710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:17.788 [2024-12-16 20:19:25.329716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.330017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.330031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:17.788 [2024-12-16 
20:19:25.330038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:26:17.788 [2024-12-16 20:19:25.330043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.330073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.788 [2024-12-16 20:19:25.330078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:17.788 [2024-12-16 20:19:25.330084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:17.788 [2024-12-16 20:19:25.330090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.788 [2024-12-16 20:19:25.341880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.789 [2024-12-16 20:19:25.341905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:17.789 [2024-12-16 20:19:25.341913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.775 ms 00:26:17.789 [2024-12-16 20:19:25.341919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.789 [2024-12-16 20:19:25.351674] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:17.789 [2024-12-16 20:19:25.351700] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:17.789 [2024-12-16 20:19:25.351708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.789 [2024-12-16 20:19:25.351713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:17.789 [2024-12-16 20:19:25.351720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.715 ms 00:26:17.789 [2024-12-16 20:19:25.351731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.789 [2024-12-16 20:19:25.362036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.789 [2024-12-16 20:19:25.362059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:17.789 [2024-12-16 20:19:25.362068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.276 ms 00:26:17.789 [2024-12-16 20:19:25.362074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.789 [2024-12-16 20:19:25.370787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.789 [2024-12-16 20:19:25.370811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:17.789 [2024-12-16 20:19:25.370818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.690 ms 00:26:17.789 [2024-12-16 20:19:25.370823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.789 [2024-12-16 20:19:25.379330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.789 [2024-12-16 20:19:25.379354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:17.789 [2024-12-16 20:19:25.379361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.481 ms 00:26:17.789 [2024-12-16 20:19:25.379367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.789 [2024-12-16 20:19:25.379639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.789 [2024-12-16 20:19:25.379648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:17.789 [2024-12-16 20:19:25.379654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 
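
The startup sequence above is a recovery from the dirty state intentionally left by prep_upgrade_on_shutdown: FTL reloads the superblock, then restores NV cache state, the valid map, band info, trim metadata, P2L checkpoints and finally the L2P before serving I/O. Once it finishes, the run re-reads the properties to confirm the shutdown actually drained the device. A sketch of how those checks are composed, using the RPC call and jq filters printed in this run (the test wraps the RPC call in an ftl_get_properties helper):

  props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
  # write-buffer chunks still holding data after the prep-upgrade shutdown
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # bands left open mid-write
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  [[ $used -eq 0 && $opened -eq 0 ]] || echo "FTL was not fully drained before the upgrade shutdown"
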
00:26:17.789 [2024-12-16 20:19:25.379660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.789 [2024-12-16 20:19:25.425271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.425392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:18.050 [2024-12-16 20:19:25.425405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.597 ms 00:26:18.050 [2024-12-16 20:19:25.425412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.433219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:18.050 [2024-12-16 20:19:25.433781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.433803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:18.050 [2024-12-16 20:19:25.433811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.342 ms 00:26:18.050 [2024-12-16 20:19:25.433820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.433862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.433869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:18.050 [2024-12-16 20:19:25.433876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:18.050 [2024-12-16 20:19:25.433881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.433912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.433920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:18.050 [2024-12-16 20:19:25.433927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:18.050 [2024-12-16 20:19:25.433932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.434883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.434908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:18.050 [2024-12-16 20:19:25.434915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.935 ms 00:26:18.050 [2024-12-16 20:19:25.434921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.434939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.434946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:18.050 [2024-12-16 20:19:25.434951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:18.050 [2024-12-16 20:19:25.434957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.434984] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:18.050 [2024-12-16 20:19:25.434992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.434999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:18.050 [2024-12-16 20:19:25.435005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:18.050 [2024-12-16 20:19:25.435011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.452558] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.452655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:18.050 [2024-12-16 20:19:25.452668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.533 ms 00:26:18.050 [2024-12-16 20:19:25.452674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.452730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.050 [2024-12-16 20:19:25.452737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:18.050 [2024-12-16 20:19:25.452743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:18.050 [2024-12-16 20:19:25.452748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.050 [2024-12-16 20:19:25.453492] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 181.999 ms, result 0 00:26:18.050 [2024-12-16 20:19:25.468881] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.050 [2024-12-16 20:19:25.484873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:18.050 [2024-12-16 20:19:25.492975] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:18.311 20:19:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.311 20:19:25 -- common/autotest_common.sh@862 -- # return 0 00:26:18.311 20:19:25 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:18.311 20:19:25 -- ftl/common.sh@95 -- # return 0 00:26:18.312 20:19:25 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:18.589 [2024-12-16 20:19:25.969864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.589 [2024-12-16 20:19:25.969895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:18.589 [2024-12-16 20:19:25.969904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:18.589 [2024-12-16 20:19:25.969910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.589 [2024-12-16 20:19:25.969927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.589 [2024-12-16 20:19:25.969933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:18.589 [2024-12-16 20:19:25.969939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:18.589 [2024-12-16 20:19:25.969947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.589 [2024-12-16 20:19:25.969962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.589 [2024-12-16 20:19:25.969968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:18.589 [2024-12-16 20:19:25.969974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:18.589 [2024-12-16 20:19:25.969980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.589 [2024-12-16 20:19:25.970020] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.150 ms, result 0 00:26:18.589 true 00:26:18.589 20:19:25 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:26:18.589 { 00:26:18.589 "name": "ftl", 00:26:18.589 "properties": [ 00:26:18.589 { 00:26:18.589 "name": "superblock_version", 00:26:18.589 "value": 5, 00:26:18.589 "read-only": true 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "name": "base_device", 00:26:18.589 "bands": [ 00:26:18.589 { 00:26:18.589 "id": 0, 00:26:18.589 "state": "CLOSED", 00:26:18.589 "validity": 1.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 1, 00:26:18.589 "state": "CLOSED", 00:26:18.589 "validity": 1.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 2, 00:26:18.589 "state": "CLOSED", 00:26:18.589 "validity": 0.007843137254901933 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 3, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 4, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 5, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 6, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 7, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 8, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 9, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 10, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 11, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 12, 00:26:18.589 "state": "FREE", 00:26:18.589 "validity": 0.0 00:26:18.589 }, 00:26:18.589 { 00:26:18.589 "id": 13, 00:26:18.589 "state": "FREE", 00:26:18.590 "validity": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 14, 00:26:18.590 "state": "FREE", 00:26:18.590 "validity": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 15, 00:26:18.590 "state": "FREE", 00:26:18.590 "validity": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 16, 00:26:18.590 "state": "FREE", 00:26:18.590 "validity": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 17, 00:26:18.590 "state": "FREE", 00:26:18.590 "validity": 0.0 00:26:18.590 } 00:26:18.590 ], 00:26:18.590 "read-only": true 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "name": "cache_device", 00:26:18.590 "type": "bdev", 00:26:18.590 "chunks": [ 00:26:18.590 { 00:26:18.590 "id": 0, 00:26:18.590 "state": "OPEN", 00:26:18.590 "utilization": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 1, 00:26:18.590 "state": "OPEN", 00:26:18.590 "utilization": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 2, 00:26:18.590 "state": "FREE", 00:26:18.590 "utilization": 0.0 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "id": 3, 00:26:18.590 "state": "FREE", 00:26:18.590 "utilization": 0.0 00:26:18.590 } 00:26:18.590 ], 00:26:18.590 "read-only": true 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "name": "verbose_mode", 00:26:18.590 "value": true, 00:26:18.590 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:18.590 }, 00:26:18.590 { 00:26:18.590 "name": "prep_upgrade_on_shutdown", 00:26:18.590 "value": false, 00:26:18.590 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:18.590 } 00:26:18.590 ] 00:26:18.590 } 00:26:18.590 20:19:26 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
00:26:18.590 20:19:26 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:18.590 20:19:26 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:18.887 20:19:26 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:18.887 20:19:26 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:18.887 20:19:26 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:18.887 20:19:26 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:18.887 20:19:26 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:19.155 Validate MD5 checksum, iteration 1 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:19.155 20:19:26 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:19.155 20:19:26 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:19.155 20:19:26 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:19.155 20:19:26 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:19.155 20:19:26 -- ftl/common.sh@154 -- # return 0 00:26:19.155 20:19:26 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:19.155 [2024-12-16 20:19:26.614446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
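The xtrace at upgrade_shutdown.sh@82 and @89 above shows how the test checks that nothing is still in flight before it starts validating data: it dumps the FTL bdev properties over RPC and uses jq to count cache chunks with non-zero utilization and bands reported as OPENED. A minimal sketch of that check, restating only commands visible in the trace (the shell variables are placeholders):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # cache chunks that still hold data (expected 0 in this run)
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # bands reported as OPENED (also expected 0)
    opened=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    [[ $used -eq 0 && $opened -eq 0 ]]   # both counters are 0 here

Both counters come back 0, so the script moves straight on to the first MD5 pass.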
00:26:19.155 [2024-12-16 20:19:26.614674] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78652 ] 00:26:19.155 [2024-12-16 20:19:26.764061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.416 [2024-12-16 20:19:26.935147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.803  [2024-12-16T20:19:29.016Z] Copying: 770/1024 [MB] (770 MBps) [2024-12-16T20:19:34.309Z] Copying: 1024/1024 [MB] (average 730 MBps) 00:26:26.669 00:26:26.669 20:19:33 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:26.669 20:19:33 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@103 -- # sum=4986e0df62bef4c676ee4ee89ba399fc 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@105 -- # [[ 4986e0df62bef4c676ee4ee89ba399fc != \4\9\8\6\e\0\d\f\6\2\b\e\f\4\c\6\7\6\e\e\4\e\e\8\9\b\a\3\9\9\f\c ]] 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:27.613 Validate MD5 checksum, iteration 2 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:27.613 20:19:35 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:27.613 20:19:35 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:27.613 20:19:35 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:27.613 20:19:35 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:27.613 20:19:35 -- ftl/common.sh@154 -- # return 0 00:26:27.613 20:19:35 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:27.873 [2024-12-16 20:19:35.269944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
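Each validation iteration above works the same way: tcp_dd brings up spdk_dd as an NVMe/TCP initiator against the target on 127.0.0.1:4420 (via the ini.json config), copies 1024 blocks of 1 MiB from the ftln1 namespace into test/ftl/file at queue depth 2, and the resulting file is hashed and compared against the digest expected for that 1 GiB region (presumably recorded when the data was first written earlier in the test). A minimal sketch of one iteration, restating the commands from the trace; the expected digest shown is simply the value this run produced:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum == 4986e0df62bef4c676ee4ee89ba399fc ]]   # iteration 1 digest from this run

With iteration 1 matching, the loop bumps skip to 1024 and repeats for the second gigabyte.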
00:26:27.873 [2024-12-16 20:19:35.270162] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78748 ] 00:26:27.873 [2024-12-16 20:19:35.418107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.135 [2024-12-16 20:19:35.588262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.521  [2024-12-16T20:19:37.734Z] Copying: 626/1024 [MB] (626 MBps) [2024-12-16T20:19:38.677Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:26:31.037 00:26:31.037 20:19:38 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:31.037 20:19:38 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:33.585 20:19:40 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:33.586 20:19:40 -- ftl/upgrade_shutdown.sh@103 -- # sum=8db0cf07393cc82fe4430dc8f4f16bb8 00:26:33.586 20:19:40 -- ftl/upgrade_shutdown.sh@105 -- # [[ 8db0cf07393cc82fe4430dc8f4f16bb8 != \8\d\b\0\c\f\0\7\3\9\3\c\c\8\2\f\e\4\4\3\0\d\c\8\f\4\f\1\6\b\b\8 ]] 00:26:33.586 20:19:40 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:33.586 20:19:40 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:33.586 20:19:40 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:33.586 20:19:40 -- ftl/common.sh@137 -- # [[ -n 78613 ]] 00:26:33.586 20:19:40 -- ftl/common.sh@138 -- # kill -9 78613 00:26:33.586 20:19:40 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:33.586 20:19:40 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:33.586 20:19:40 -- ftl/common.sh@81 -- # local base_bdev= 00:26:33.586 20:19:40 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:33.586 20:19:40 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:33.586 20:19:40 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:33.586 20:19:40 -- ftl/common.sh@89 -- # spdk_tgt_pid=78809 00:26:33.586 20:19:40 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:33.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.586 20:19:40 -- ftl/common.sh@91 -- # waitforlisten 78809 00:26:33.586 20:19:40 -- common/autotest_common.sh@829 -- # '[' -z 78809 ']' 00:26:33.586 20:19:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.586 20:19:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.586 20:19:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.586 20:19:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.586 20:19:40 -- common/autotest_common.sh@10 -- # set +x 00:26:33.586 [2024-12-16 20:19:40.784953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
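Once both regions validate, upgrade_shutdown.sh@114 performs the "dirty" half of the test: tcp_target_shutdown_dirty kills the target (pid 78613) with SIGKILL, so FTL never gets a chance to persist its clean-shutdown state, and @115 tcp_target_setup immediately starts a fresh spdk_tgt (pid 78809) from the tgt.json saved earlier. That is what forces the recovery path traced below (band state recovery, P2L checkpoint restore, open-chunk recovery, L2P restore from SHM). A minimal sketch of the kill-and-restart, assuming the pid handling of the real helpers:

    kill -9 "$spdk_tgt_pid"        # SIGKILL: no clean-state write, FTL stays dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"  # autotest helper: waits for /var/tmp/spdk.sock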
00:26:33.586 [2024-12-16 20:19:40.785062] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78809 ] 00:26:33.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 78613 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:33.586 [2024-12-16 20:19:40.932130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.586 [2024-12-16 20:19:41.075361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:33.586 [2024-12-16 20:19:41.075506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.158 [2024-12-16 20:19:41.600317] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:34.158 [2024-12-16 20:19:41.600364] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:34.158 [2024-12-16 20:19:41.736635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.736668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:34.158 [2024-12-16 20:19:41.736680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:34.158 [2024-12-16 20:19:41.736686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.736725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.736734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:34.158 [2024-12-16 20:19:41.736740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:34.158 [2024-12-16 20:19:41.736746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.736760] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:34.158 [2024-12-16 20:19:41.737296] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:34.158 [2024-12-16 20:19:41.737323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.737329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:34.158 [2024-12-16 20:19:41.737335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:26:34.158 [2024-12-16 20:19:41.737341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.737532] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:34.158 [2024-12-16 20:19:41.749965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.749993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:34.158 [2024-12-16 20:19:41.750002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.433 ms 00:26:34.158 [2024-12-16 20:19:41.750008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.756726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.756751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:34.158 [2024-12-16 20:19:41.756758] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:34.158 [2024-12-16 20:19:41.756764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.756998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.757006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:34.158 [2024-12-16 20:19:41.757013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:26:34.158 [2024-12-16 20:19:41.757018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.757042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.757048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:34.158 [2024-12-16 20:19:41.757054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:34.158 [2024-12-16 20:19:41.757061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.757078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.757084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:34.158 [2024-12-16 20:19:41.757090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:34.158 [2024-12-16 20:19:41.757095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.757113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:34.158 [2024-12-16 20:19:41.759509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.759652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:34.158 [2024-12-16 20:19:41.759665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.402 ms 00:26:34.158 [2024-12-16 20:19:41.759671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.158 [2024-12-16 20:19:41.759695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.158 [2024-12-16 20:19:41.759702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:34.158 [2024-12-16 20:19:41.759710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:34.158 [2024-12-16 20:19:41.759716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.159 [2024-12-16 20:19:41.759733] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:34.159 [2024-12-16 20:19:41.759746] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:34.159 [2024-12-16 20:19:41.759772] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:34.159 [2024-12-16 20:19:41.759783] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:34.159 [2024-12-16 20:19:41.759841] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:34.159 [2024-12-16 20:19:41.759850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:34.159 [2024-12-16 20:19:41.759860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:26:34.159 [2024-12-16 20:19:41.759867] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:34.159 [2024-12-16 20:19:41.759873] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:34.159 [2024-12-16 20:19:41.759879] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:34.159 [2024-12-16 20:19:41.759885] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:34.159 [2024-12-16 20:19:41.759890] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:34.159 [2024-12-16 20:19:41.759895] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:34.159 [2024-12-16 20:19:41.759900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.159 [2024-12-16 20:19:41.759906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:34.159 [2024-12-16 20:19:41.759914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:26:34.159 [2024-12-16 20:19:41.759925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.159 [2024-12-16 20:19:41.759978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.159 [2024-12-16 20:19:41.759984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:34.159 [2024-12-16 20:19:41.759990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:34.159 [2024-12-16 20:19:41.759996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.159 [2024-12-16 20:19:41.760052] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:34.159 [2024-12-16 20:19:41.760059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:34.159 [2024-12-16 20:19:41.760064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:34.159 [2024-12-16 20:19:41.760084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:34.159 [2024-12-16 20:19:41.760094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:34.159 [2024-12-16 20:19:41.760100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:34.159 [2024-12-16 20:19:41.760105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760110] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:34.159 [2024-12-16 20:19:41.760115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:34.159 [2024-12-16 20:19:41.760120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:34.159 [2024-12-16 20:19:41.760130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:26:34.159 [2024-12-16 20:19:41.760145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:34.159 [2024-12-16 20:19:41.760149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:34.159 [2024-12-16 20:19:41.760159] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:34.159 [2024-12-16 20:19:41.760164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:34.159 [2024-12-16 20:19:41.760174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:34.159 [2024-12-16 20:19:41.760179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:34.159 [2024-12-16 20:19:41.760189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:34.159 [2024-12-16 20:19:41.760194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:34.159 [2024-12-16 20:19:41.760203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:34.159 [2024-12-16 20:19:41.760208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:34.159 [2024-12-16 20:19:41.760218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:34.159 [2024-12-16 20:19:41.760222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760227] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:34.159 [2024-12-16 20:19:41.760231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:34.159 [2024-12-16 20:19:41.760236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:34.159 [2024-12-16 20:19:41.760248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760257] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:34.159 [2024-12-16 20:19:41.760263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:34.159 [2024-12-16 20:19:41.760269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:34.159 [2024-12-16 20:19:41.760280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:34.159 [2024-12-16 20:19:41.760285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:34.159 [2024-12-16 20:19:41.760290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:34.159 [2024-12-16 20:19:41.760295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:34.159 [2024-12-16 20:19:41.760316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:26:34.159 [2024-12-16 20:19:41.760321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:34.159 [2024-12-16 20:19:41.760327] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:34.159 [2024-12-16 20:19:41.760335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:34.159 [2024-12-16 20:19:41.760347] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:34.159 [2024-12-16 20:19:41.760368] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:34.159 [2024-12-16 20:19:41.760373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:34.159 [2024-12-16 20:19:41.760379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:34.159 [2024-12-16 20:19:41.760384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:34.159 [2024-12-16 20:19:41.760411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:34.159 [2024-12-16 20:19:41.760416] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:34.159 [2024-12-16 20:19:41.760422] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:34.159 [2024-12-16 20:19:41.760433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:34.159 [2024-12-16 20:19:41.760440] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:34.159 
[2024-12-16 20:19:41.760445] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:34.159 [2024-12-16 20:19:41.760451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.159 [2024-12-16 20:19:41.760457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:34.159 [2024-12-16 20:19:41.760463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:26:34.159 [2024-12-16 20:19:41.760470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.159 [2024-12-16 20:19:41.771286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.159 [2024-12-16 20:19:41.771377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:34.159 [2024-12-16 20:19:41.771425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.783 ms 00:26:34.159 [2024-12-16 20:19:41.771442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.159 [2024-12-16 20:19:41.771480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.159 [2024-12-16 20:19:41.771575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:34.160 [2024-12-16 20:19:41.771611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:34.160 [2024-12-16 20:19:41.771625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.160 [2024-12-16 20:19:41.795582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.160 [2024-12-16 20:19:41.795671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:34.160 [2024-12-16 20:19:41.795712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.911 ms 00:26:34.160 [2024-12-16 20:19:41.795730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.160 [2024-12-16 20:19:41.795768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.160 [2024-12-16 20:19:41.795784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:34.160 [2024-12-16 20:19:41.795799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:34.160 [2024-12-16 20:19:41.795814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.160 [2024-12-16 20:19:41.795887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.160 [2024-12-16 20:19:41.795906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:34.160 [2024-12-16 20:19:41.795922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:34.160 [2024-12-16 20:19:41.795965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.160 [2024-12-16 20:19:41.796007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.160 [2024-12-16 20:19:41.796025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:34.160 [2024-12-16 20:19:41.796040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:34.160 [2024-12-16 20:19:41.796054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.421 [2024-12-16 20:19:41.807885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.421 [2024-12-16 20:19:41.807971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:34.421 [2024-12-16 
20:19:41.808009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.807 ms 00:26:34.421 [2024-12-16 20:19:41.808027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.421 [2024-12-16 20:19:41.808109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.421 [2024-12-16 20:19:41.808129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:34.421 [2024-12-16 20:19:41.808144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:34.421 [2024-12-16 20:19:41.808158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.421 [2024-12-16 20:19:41.820763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.421 [2024-12-16 20:19:41.820850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:34.421 [2024-12-16 20:19:41.820888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.580 ms 00:26:34.421 [2024-12-16 20:19:41.820905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.421 [2024-12-16 20:19:41.827824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.421 [2024-12-16 20:19:41.827902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:34.422 [2024-12-16 20:19:41.827940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:26:34.422 [2024-12-16 20:19:41.827957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.872330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.422 [2024-12-16 20:19:41.872436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:34.422 [2024-12-16 20:19:41.872474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 44.326 ms 00:26:34.422 [2024-12-16 20:19:41.872491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.872554] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:34.422 [2024-12-16 20:19:41.872610] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:34.422 [2024-12-16 20:19:41.872656] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:34.422 [2024-12-16 20:19:41.872701] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:34.422 [2024-12-16 20:19:41.872724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.422 [2024-12-16 20:19:41.872766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:34.422 [2024-12-16 20:19:41.872814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:26:34.422 [2024-12-16 20:19:41.872832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.872913] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:34.422 [2024-12-16 20:19:41.872942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.422 [2024-12-16 20:19:41.872957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:34.422 [2024-12-16 20:19:41.872973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:34.422 [2024-12-16 
20:19:41.872987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.884141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.422 [2024-12-16 20:19:41.884226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:34.422 [2024-12-16 20:19:41.884268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.101 ms 00:26:34.422 [2024-12-16 20:19:41.884284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.890796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.422 [2024-12-16 20:19:41.890870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:34.422 [2024-12-16 20:19:41.890907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:34.422 [2024-12-16 20:19:41.890927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.890972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.422 [2024-12-16 20:19:41.891091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:26:34.422 [2024-12-16 20:19:41.891109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:34.422 [2024-12-16 20:19:41.891124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.422 [2024-12-16 20:19:41.891242] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:26:34.994 [2024-12-16 20:19:42.429677] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:26:34.994 [2024-12-16 20:19:42.429958] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:26:35.567 [2024-12-16 20:19:43.001481] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:26:35.567 [2024-12-16 20:19:43.001671] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:35.567 [2024-12-16 20:19:43.001741] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:35.567 [2024-12-16 20:19:43.001797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.001816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:35.567 [2024-12-16 20:19:43.001835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1110.643 ms 00:26:35.567 [2024-12-16 20:19:43.001850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.001898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.002004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:35.567 [2024-12-16 20:19:43.002025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:35.567 [2024-12-16 20:19:43.002040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.011026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:35.567 [2024-12-16 20:19:43.011203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.011264] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:35.567 [2024-12-16 20:19:43.011320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.140 ms 00:26:35.567 [2024-12-16 20:19:43.011340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.011878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.011956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:26:35.567 [2024-12-16 20:19:43.012003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.472 ms 00:26:35.567 [2024-12-16 20:19:43.012020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.013732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.013816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:35.567 [2024-12-16 20:19:43.013861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.687 ms 00:26:35.567 [2024-12-16 20:19:43.013878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.032630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.032727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:26:35.567 [2024-12-16 20:19:43.032765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.722 ms 00:26:35.567 [2024-12-16 20:19:43.032782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.032865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.032886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:35.567 [2024-12-16 20:19:43.032901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:35.567 [2024-12-16 20:19:43.032915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.033882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.033967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:35.567 [2024-12-16 20:19:43.034005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:26:35.567 [2024-12-16 20:19:43.034022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.034055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.034071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:35.567 [2024-12-16 20:19:43.034109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:35.567 [2024-12-16 20:19:43.034125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.034160] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:35.567 [2024-12-16 20:19:43.034178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.034213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:35.567 [2024-12-16 20:19:43.034230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:35.567 [2024-12-16 20:19:43.034319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.034423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.567 [2024-12-16 20:19:43.034444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:35.567 [2024-12-16 20:19:43.034460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:35.567 [2024-12-16 20:19:43.034474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.567 [2024-12-16 20:19:43.035285] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1298.326 ms, result 0 00:26:35.567 [2024-12-16 20:19:43.049004] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.567 [2024-12-16 20:19:43.064997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:35.567 [2024-12-16 20:19:43.073095] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:36.140 20:19:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.140 20:19:43 -- common/autotest_common.sh@862 -- # return 0 00:26:36.140 20:19:43 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:36.140 20:19:43 -- ftl/common.sh@95 -- # return 0 00:26:36.140 Validate MD5 checksum, iteration 1 00:26:36.140 20:19:43 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:36.140 20:19:43 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:36.140 20:19:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:36.140 20:19:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:36.140 20:19:43 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:36.140 20:19:43 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:36.140 20:19:43 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:36.140 20:19:43 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:36.140 20:19:43 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:36.140 20:19:43 -- ftl/common.sh@154 -- # return 0 00:26:36.140 20:19:43 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:36.141 [2024-12-16 20:19:43.677150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
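After the dirty-state recovery the device comes back online (the 'FTL startup' management process now takes about 1.3 s versus roughly 0.18 s for the clean start earlier), and @116 repeats the same two-iteration checksum pass to prove that the data written before the SIGKILL survived. A compressed sketch of that pass; tcp_dd is the helper from ftl/common.sh traced above, and $testfile is a placeholder for test/ftl/file:

    for skip in 0 1024; do                     # same two 1 GiB regions as before the kill
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        md5sum "$testfile" | cut -f1 -d' '     # must reproduce 4986e0df... and 8db0cf07...
    done

The digests that follow match the pre-shutdown values, which is the core pass condition of this test.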
00:26:36.141 [2024-12-16 20:19:43.677402] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78853 ] 00:26:36.402 [2024-12-16 20:19:43.823431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.402 [2024-12-16 20:19:43.961771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.788  [2024-12-16T20:19:46.371Z] Copying: 557/1024 [MB] (557 MBps) [2024-12-16T20:19:47.758Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:26:40.118 00:26:40.118 20:19:47 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:40.118 20:19:47 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:42.033 Validate MD5 checksum, iteration 2 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@103 -- # sum=4986e0df62bef4c676ee4ee89ba399fc 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@105 -- # [[ 4986e0df62bef4c676ee4ee89ba399fc != \4\9\8\6\e\0\d\f\6\2\b\e\f\4\c\6\7\6\e\e\4\e\e\8\9\b\a\3\9\9\f\c ]] 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:42.033 20:19:49 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:42.033 20:19:49 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:42.033 20:19:49 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:42.033 20:19:49 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:42.033 20:19:49 -- ftl/common.sh@154 -- # return 0 00:26:42.033 20:19:49 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:42.033 [2024-12-16 20:19:49.638465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:42.033 [2024-12-16 20:19:49.638788] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78920 ] 00:26:42.294 [2024-12-16 20:19:49.789026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.555 [2024-12-16 20:19:49.984975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.942  [2024-12-16T20:19:52.177Z] Copying: 737/1024 [MB] (737 MBps) [2024-12-16T20:19:56.393Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:26:48.753 00:26:48.753 20:19:55 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:48.753 20:19:55 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@103 -- # sum=8db0cf07393cc82fe4430dc8f4f16bb8 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@105 -- # [[ 8db0cf07393cc82fe4430dc8f4f16bb8 != \8\d\b\0\c\f\0\7\3\9\3\c\c\8\2\f\e\4\4\3\0\d\c\8\f\4\f\1\6\b\b\8 ]] 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:50.139 20:19:57 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:50.400 20:19:57 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:50.400 20:19:57 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:50.400 20:19:57 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:50.400 20:19:57 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:50.400 20:19:57 -- ftl/common.sh@130 -- # [[ -n 78809 ]] 00:26:50.400 20:19:57 -- ftl/common.sh@131 -- # killprocess 78809 00:26:50.400 20:19:57 -- common/autotest_common.sh@936 -- # '[' -z 78809 ']' 00:26:50.400 20:19:57 -- common/autotest_common.sh@940 -- # kill -0 78809 00:26:50.400 20:19:57 -- common/autotest_common.sh@941 -- # uname 00:26:50.400 20:19:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:50.400 20:19:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78809 00:26:50.400 killing process with pid 78809 00:26:50.400 20:19:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:50.400 20:19:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:50.401 20:19:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78809' 00:26:50.401 20:19:57 -- common/autotest_common.sh@955 -- # kill 78809 00:26:50.401 20:19:57 -- common/autotest_common.sh@960 -- # wait 78809 00:26:50.974 [2024-12-16 20:19:58.356768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:50.974 [2024-12-16 20:19:58.368604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.368656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:50.974 [2024-12-16 20:19:58.368667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:50.974 [2024-12-16 20:19:58.368673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 
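With both post-recovery digests matching, the traps are cleared and cleanup runs: the scratch file and its .md5 are removed, and this time the target is stopped through killprocess, which probes the pid, sends a plain kill (SIGTERM by default) and waits, so FTL can execute the full clean-shutdown sequence traced below (persist L2P, NV cache and valid-map metadata, band and trim metadata, superblock, and finally 'Set FTL clean state'). A minimal sketch of that graceful teardown, following the autotest_common.sh trace:

    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file \
          /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
    kill -0 "$spdk_tgt_pid"        # still running?
    kill "$spdk_tgt_pid"           # default signal, unlike the earlier kill -9
    wait "$spdk_tgt_pid"           # returns once the clean shutdown has finished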
[2024-12-16 20:19:58.368692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:50.974 [2024-12-16 20:19:58.370909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.370928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:50.974 [2024-12-16 20:19:58.370937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.206 ms 00:26:50.974 [2024-12-16 20:19:58.370943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.371142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.371149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:50.974 [2024-12-16 20:19:58.371156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:26:50.974 [2024-12-16 20:19:58.371161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.372139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.372150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:50.974 [2024-12-16 20:19:58.372157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.966 ms 00:26:50.974 [2024-12-16 20:19:58.372163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.373129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.373210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:50.974 [2024-12-16 20:19:58.373257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:26:50.974 [2024-12-16 20:19:58.373275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.381004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.381145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:50.974 [2024-12-16 20:19:58.381197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.680 ms 00:26:50.974 [2024-12-16 20:19:58.381215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.385505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.385601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:50.974 [2024-12-16 20:19:58.385647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.217 ms 00:26:50.974 [2024-12-16 20:19:58.385665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.385734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.385847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:50.974 [2024-12-16 20:19:58.385874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:50.974 [2024-12-16 20:19:58.385889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.392920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.393013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:50.974 [2024-12-16 20:19:58.393064] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.005 ms 00:26:50.974 [2024-12-16 20:19:58.393080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.400373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.400482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:50.974 [2024-12-16 20:19:58.400531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.079 ms 00:26:50.974 [2024-12-16 20:19:58.400549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.407754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.407850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:50.974 [2024-12-16 20:19:58.407901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.171 ms 00:26:50.974 [2024-12-16 20:19:58.407918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.415055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.974 [2024-12-16 20:19:58.415141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:50.974 [2024-12-16 20:19:58.415192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.073 ms 00:26:50.974 [2024-12-16 20:19:58.415208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.974 [2024-12-16 20:19:58.415239] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:50.974 [2024-12-16 20:19:58.415260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:50.974 [2024-12-16 20:19:58.415288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:50.974 [2024-12-16 20:19:58.415330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:50.974 [2024-12-16 20:19:58.415353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:50.974 [2024-12-16 20:19:58.415473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:50.974 [2024-12-16 20:19:58.415500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415742] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:50.975 [2024-12-16 20:19:58.415887] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:50.975 [2024-12-16 20:19:58.415902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dbd316a2-e20b-4127-a678-9a1761b2de7f 00:26:50.975 [2024-12-16 20:19:58.415945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:50.975 [2024-12-16 20:19:58.415960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:50.975 [2024-12-16 20:19:58.415973] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:50.975 [2024-12-16 20:19:58.415987] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:50.975 [2024-12-16 20:19:58.416001] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:50.975 [2024-12-16 20:19:58.416033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:50.975 [2024-12-16 20:19:58.416049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:50.975 [2024-12-16 20:19:58.416062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:50.975 [2024-12-16 20:19:58.416076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:50.975 [2024-12-16 20:19:58.416090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.975 [2024-12-16 20:19:58.416105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:50.975 [2024-12-16 20:19:58.416138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.851 ms 00:26:50.975 [2024-12-16 20:19:58.416157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.425770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.975 [2024-12-16 20:19:58.425855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:50.975 [2024-12-16 20:19:58.425892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.586 ms 00:26:50.975 [2024-12-16 20:19:58.425909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.426064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.975 [2024-12-16 20:19:58.426090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:50.975 [2024-12-16 20:19:58.426130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.130 ms 00:26:50.975 [2024-12-16 20:19:58.426147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.461606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.461703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:50.975 [2024-12-16 20:19:58.461742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.461759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.461792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.461812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:50.975 [2024-12-16 20:19:58.461826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.461840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.461899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.461918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:50.975 [2024-12-16 20:19:58.461967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.461983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.462006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.462021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:50.975 [2024-12-16 20:19:58.462036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.462053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.519405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.519525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:50.975 [2024-12-16 20:19:58.519566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.519584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.542136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:50.975 [2024-12-16 20:19:58.542180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.542196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.542267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:50.975 [2024-12-16 20:19:58.542282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.542332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.542396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:50.975 [2024-12-16 20:19:58.542441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.542457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.542563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:50.975 [2024-12-16 20:19:58.542577] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.542591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.542714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:50.975 [2024-12-16 20:19:58.542728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.542742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.542798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:50.975 [2024-12-16 20:19:58.542812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.542913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.542964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:50.975 [2024-12-16 20:19:58.543012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:50.975 [2024-12-16 20:19:58.543029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:50.975 [2024-12-16 20:19:58.543089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.975 [2024-12-16 20:19:58.543206] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 174.578 ms, result 0 00:26:51.918 20:19:59 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:51.918 20:19:59 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:51.918 20:19:59 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:51.918 20:19:59 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:51.918 20:19:59 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:51.918 20:19:59 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:51.918 20:19:59 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:51.918 Remove shared memory files 00:26:51.918 20:19:59 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:51.918 20:19:59 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:51.918 20:19:59 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:51.918 20:19:59 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78613 00:26:51.918 20:19:59 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:51.918 20:19:59 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:51.918 ************************************ 00:26:51.918 END TEST ftl_upgrade_shutdown 00:26:51.918 ************************************ 00:26:51.918 00:26:51.918 real 1m25.015s 00:26:51.918 user 1m58.198s 00:26:51.918 sys 0m18.839s 00:26:51.918 20:19:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:51.918 20:19:59 -- common/autotest_common.sh@10 -- # set +x 00:26:51.918 20:19:59 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:26:51.918 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:26:51.918 20:19:59 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:26:51.918 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:26:51.918 20:19:59 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:26:51.918 20:19:59 -- ftl/ftl.sh@14 -- # killprocess 70432 00:26:51.918 20:19:59 -- 
common/autotest_common.sh@936 -- # '[' -z 70432 ']' 00:26:51.918 20:19:59 -- common/autotest_common.sh@940 -- # kill -0 70432 00:26:51.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70432) - No such process 00:26:51.918 Process with pid 70432 is not found 00:26:51.918 20:19:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70432 is not found' 00:26:51.918 20:19:59 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:26:51.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.918 20:19:59 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79051 00:26:51.918 20:19:59 -- ftl/ftl.sh@20 -- # waitforlisten 79051 00:26:51.919 20:19:59 -- common/autotest_common.sh@829 -- # '[' -z 79051 ']' 00:26:51.919 20:19:59 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:51.919 20:19:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.919 20:19:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.919 20:19:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.919 20:19:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.919 20:19:59 -- common/autotest_common.sh@10 -- # set +x 00:26:51.919 [2024-12-16 20:19:59.315796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:51.919 [2024-12-16 20:19:59.315990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79051 ] 00:26:51.919 [2024-12-16 20:19:59.456793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.180 [2024-12-16 20:19:59.598013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:52.180 [2024-12-16 20:19:59.598318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.750 20:20:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.750 20:20:00 -- common/autotest_common.sh@862 -- # return 0 00:26:52.750 20:20:00 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:26:52.750 nvme0n1 00:26:52.750 20:20:00 -- ftl/ftl.sh@22 -- # clear_lvols 00:26:52.750 20:20:00 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:52.750 20:20:00 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:53.011 20:20:00 -- ftl/common.sh@28 -- # stores=8fee2bce-16b6-4cf1-835c-1fb8a97a5fb9 00:26:53.011 20:20:00 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:53.011 20:20:00 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fee2bce-16b6-4cf1-835c-1fb8a97a5fb9 00:26:53.273 20:20:00 -- ftl/ftl.sh@23 -- # killprocess 79051 00:26:53.273 20:20:00 -- common/autotest_common.sh@936 -- # '[' -z 79051 ']' 00:26:53.273 20:20:00 -- common/autotest_common.sh@940 -- # kill -0 79051 00:26:53.273 20:20:00 -- common/autotest_common.sh@941 -- # uname 00:26:53.273 20:20:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:53.273 20:20:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79051 00:26:53.273 killing process with pid 79051 00:26:53.273 20:20:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:53.273 20:20:00 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:53.273 20:20:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79051' 00:26:53.273 20:20:00 -- common/autotest_common.sh@955 -- # kill 79051 00:26:53.273 20:20:00 -- common/autotest_common.sh@960 -- # wait 79051 00:26:54.655 20:20:01 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:54.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:54.655 Waiting for block devices as requested 00:26:54.655 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:26:54.655 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:26:54.915 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:54.915 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:00.205 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:00.205 20:20:07 -- ftl/ftl.sh@28 -- # remove_shm 00:27:00.205 Remove shared memory files 00:27:00.205 20:20:07 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:00.205 20:20:07 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:00.205 20:20:07 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:00.205 20:20:07 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:00.205 20:20:07 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:00.205 20:20:07 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:00.205 ************************************ 00:27:00.205 END TEST ftl 00:27:00.205 ************************************ 00:27:00.205 00:27:00.205 real 12m24.093s 00:27:00.205 user 14m21.302s 00:27:00.205 sys 1m26.145s 00:27:00.205 20:20:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.205 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:27:00.205 20:20:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:00.205 20:20:07 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:00.205 20:20:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:00.205 20:20:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:00.205 20:20:07 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:00.205 20:20:07 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:00.205 20:20:07 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:00.205 20:20:07 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:00.205 20:20:07 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:00.205 20:20:07 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:00.205 20:20:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:00.205 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:27:00.205 20:20:07 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:00.205 20:20:07 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:00.205 20:20:07 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:00.205 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:27:01.149 INFO: APP EXITING 00:27:01.149 INFO: killing all VMs 00:27:01.149 INFO: killing vhost app 00:27:01.149 INFO: EXIT DONE 00:27:01.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:01.982 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:01.982 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:01.982 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:01.982 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:02.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
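The xtrace above (the ftl.sh teardown) exercises two helpers that recur throughout these logs: killprocess from test/common/autotest_common.sh and remove_shm from test/ftl/common.sh. A minimal bash sketch of that cleanup pattern, reconstructed only from the commands visible in the trace (the real helpers carry more argument and platform handling than shown here):

    # killprocess: probe the pid with `kill -0`, make sure the target is an
    # SPDK reactor rather than sudo, then kill it and wait for it to exit.
    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null || true
        fi
    }

    # remove_shm: drop shared-memory leftovers so the next target starts clean.
    remove_shm() {
        echo 'Remove shared memory files'
        rm -f /dev/shm/spdk_tgt_trace.pid*   # per-target trace file (pid suffix varies per run)
        rm -f /dev/shm/iscsi
    }

In this run killprocess 70432 hits the "No such process" branch (the FTL app had already exited), which is why the "Process with pid 70432 is not found" fallback appears in the trace, while pid 79051 goes through the full ps/kill/wait path.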
00:27:02.817 Cleaning 00:27:02.817 Removing: /var/run/dpdk/spdk0/config 00:27:02.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:02.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:02.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:02.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:02.817 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:02.817 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:02.817 Removing: /var/run/dpdk/spdk0 00:27:02.817 Removing: /var/run/dpdk/spdk_pid55962 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56154 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56465 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56558 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56653 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56765 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56850 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56895 00:27:02.817 Removing: /var/run/dpdk/spdk_pid56937 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57001 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57085 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57509 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57575 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57640 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57651 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57749 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57765 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57864 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57882 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57935 00:27:02.817 Removing: /var/run/dpdk/spdk_pid57953 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58006 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58031 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58187 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58229 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58311 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58389 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58420 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58487 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58513 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58554 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58580 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58621 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58642 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58683 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58703 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58744 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58770 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58811 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58837 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58878 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58904 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58951 00:27:02.817 Removing: /var/run/dpdk/spdk_pid58977 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59018 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59046 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59088 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59107 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59144 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59172 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59213 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59239 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59280 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59306 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59347 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59368 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59409 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59435 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59476 00:27:02.817 Removing: 
/var/run/dpdk/spdk_pid59502 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59548 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59580 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59624 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59648 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59692 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59718 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59759 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59785 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59827 00:27:02.817 Removing: /var/run/dpdk/spdk_pid59905 00:27:02.817 Removing: /var/run/dpdk/spdk_pid60017 00:27:02.817 Removing: /var/run/dpdk/spdk_pid60187 00:27:02.817 Removing: /var/run/dpdk/spdk_pid60260 00:27:02.817 Removing: /var/run/dpdk/spdk_pid60302 00:27:02.817 Removing: /var/run/dpdk/spdk_pid60748 00:27:02.817 Removing: /var/run/dpdk/spdk_pid60959 00:27:02.817 Removing: /var/run/dpdk/spdk_pid61073 00:27:02.817 Removing: /var/run/dpdk/spdk_pid61122 00:27:02.817 Removing: /var/run/dpdk/spdk_pid61152 00:27:02.817 Removing: /var/run/dpdk/spdk_pid61235 00:27:02.817 Removing: /var/run/dpdk/spdk_pid61889 00:27:02.817 Removing: /var/run/dpdk/spdk_pid61930 00:27:02.817 Removing: /var/run/dpdk/spdk_pid62404 00:27:02.817 Removing: /var/run/dpdk/spdk_pid62524 00:27:02.817 Removing: /var/run/dpdk/spdk_pid62633 00:27:02.817 Removing: /var/run/dpdk/spdk_pid62686 00:27:02.817 Removing: /var/run/dpdk/spdk_pid62712 00:27:02.817 Removing: /var/run/dpdk/spdk_pid62743 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64674 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64813 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64817 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64829 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64891 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64895 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64912 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64963 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64967 00:27:03.079 Removing: /var/run/dpdk/spdk_pid64979 00:27:03.079 Removing: /var/run/dpdk/spdk_pid65040 00:27:03.079 Removing: /var/run/dpdk/spdk_pid65050 00:27:03.079 Removing: /var/run/dpdk/spdk_pid65062 00:27:03.079 Removing: /var/run/dpdk/spdk_pid66513 00:27:03.079 Removing: /var/run/dpdk/spdk_pid66616 00:27:03.079 Removing: /var/run/dpdk/spdk_pid66745 00:27:03.079 Removing: /var/run/dpdk/spdk_pid66827 00:27:03.079 Removing: /var/run/dpdk/spdk_pid66911 00:27:03.079 Removing: /var/run/dpdk/spdk_pid66990 00:27:03.079 Removing: /var/run/dpdk/spdk_pid67089 00:27:03.079 Removing: /var/run/dpdk/spdk_pid67163 00:27:03.079 Removing: /var/run/dpdk/spdk_pid67304 00:27:03.079 Removing: /var/run/dpdk/spdk_pid67679 00:27:03.079 Removing: /var/run/dpdk/spdk_pid67716 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68136 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68331 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68438 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68542 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68596 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68623 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68932 00:27:03.079 Removing: /var/run/dpdk/spdk_pid68994 00:27:03.079 Removing: /var/run/dpdk/spdk_pid69069 00:27:03.079 Removing: /var/run/dpdk/spdk_pid69464 00:27:03.079 Removing: /var/run/dpdk/spdk_pid69617 00:27:03.079 Removing: /var/run/dpdk/spdk_pid70432 00:27:03.079 Removing: /var/run/dpdk/spdk_pid70569 00:27:03.079 Removing: /var/run/dpdk/spdk_pid70755 00:27:03.079 Removing: /var/run/dpdk/spdk_pid70852 00:27:03.079 Removing: /var/run/dpdk/spdk_pid71160 00:27:03.079 Removing: /var/run/dpdk/spdk_pid71392 
00:27:03.079 Removing: /var/run/dpdk/spdk_pid71752 00:27:03.079 Removing: /var/run/dpdk/spdk_pid71956 00:27:03.079 Removing: /var/run/dpdk/spdk_pid72081 00:27:03.079 Removing: /var/run/dpdk/spdk_pid72128 00:27:03.079 Removing: /var/run/dpdk/spdk_pid72391 00:27:03.079 Removing: /var/run/dpdk/spdk_pid72427 00:27:03.079 Removing: /var/run/dpdk/spdk_pid72484 00:27:03.079 Removing: /var/run/dpdk/spdk_pid72767 00:27:03.079 Removing: /var/run/dpdk/spdk_pid73021 00:27:03.079 Removing: /var/run/dpdk/spdk_pid73603 00:27:03.079 Removing: /var/run/dpdk/spdk_pid74228 00:27:03.079 Removing: /var/run/dpdk/spdk_pid74822 00:27:03.079 Removing: /var/run/dpdk/spdk_pid75568 00:27:03.079 Removing: /var/run/dpdk/spdk_pid75732 00:27:03.079 Removing: /var/run/dpdk/spdk_pid75814 00:27:03.079 Removing: /var/run/dpdk/spdk_pid76244 00:27:03.079 Removing: /var/run/dpdk/spdk_pid76302 00:27:03.079 Removing: /var/run/dpdk/spdk_pid76818 00:27:03.079 Removing: /var/run/dpdk/spdk_pid77276 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78025 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78157 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78212 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78280 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78340 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78404 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78613 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78652 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78748 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78809 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78853 00:27:03.079 Removing: /var/run/dpdk/spdk_pid78920 00:27:03.079 Removing: /var/run/dpdk/spdk_pid79051 00:27:03.079 Clean 00:27:03.079 killing process with pid 48186 00:27:03.340 killing process with pid 48191 00:27:03.340 20:20:10 -- common/autotest_common.sh@1446 -- # return 0 00:27:03.340 20:20:10 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:03.340 20:20:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.340 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.340 20:20:10 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:03.340 20:20:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.340 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.340 20:20:10 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:03.340 20:20:10 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:03.340 20:20:10 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:03.340 20:20:10 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:03.340 20:20:10 -- spdk/autotest.sh@383 -- # hostname 00:27:03.340 20:20:10 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:03.602 geninfo: WARNING: invalid characters removed from testname! 
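At this point autotest.sh has finished and moves into coverage post-processing: the lcov capture above plus the merge and filter calls that follow below. A condensed sketch of that flow, with the long list of repeated --rc options abbreviated into one variable and paths shortened (SRC, OUT and the $(hostname) substitution are shorthand for what the full commands spell out):

    # Coverage post-processing as seen in this section of the log.
    SRC=/home/vagrant/spdk_repo/spdk
    OUT=$SRC/../output
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

    # Capture the data gathered while the tests ran, tagged with the host name.
    $LCOV -c --no-external -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge with the pre-test baseline, then strip code we do not want counted:
    # DPDK, system headers, examples and standalone apps.
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

The "invalid characters removed from testname" warning above is emitted by geninfo for the -t argument, most likely because the fedora39 image name contains hyphens that geninfo strips; it is harmless here.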
00:27:30.194 20:20:33 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:30.194 20:20:37 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:31.136 20:20:38 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:33.113 20:20:40 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:35.654 20:20:42 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:37.031 20:20:44 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:39.577 20:20:47 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:39.577 20:20:47 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:39.577 20:20:47 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:39.577 20:20:47 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:39.577 20:20:47 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:39.577 20:20:47 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:39.577 20:20:47 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:27:39.577 20:20:47 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:39.577 20:20:47 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:39.577 20:20:47 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:39.577 20:20:47 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:39.577 20:20:47 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:39.577 20:20:47 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:39.577 20:20:47 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:39.577 20:20:47 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:39.577 20:20:47 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:27:39.577 20:20:47 -- scripts/common.sh@343 -- $ case "$op" in 00:27:39.577 20:20:47 -- scripts/common.sh@344 -- $ : 1 00:27:39.577 20:20:47 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:39.577 20:20:47 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.577 20:20:47 -- scripts/common.sh@364 -- $ decimal 1 00:27:39.577 20:20:47 -- scripts/common.sh@352 -- $ local d=1 00:27:39.577 20:20:47 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:39.577 20:20:47 -- scripts/common.sh@354 -- $ echo 1 00:27:39.577 20:20:47 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:39.577 20:20:47 -- scripts/common.sh@365 -- $ decimal 2 00:27:39.577 20:20:47 -- scripts/common.sh@352 -- $ local d=2 00:27:39.577 20:20:47 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:39.577 20:20:47 -- scripts/common.sh@354 -- $ echo 2 00:27:39.577 20:20:47 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:39.577 20:20:47 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:39.577 20:20:47 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:39.577 20:20:47 -- scripts/common.sh@367 -- $ return 0 00:27:39.577 20:20:47 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.577 20:20:47 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.577 --rc genhtml_branch_coverage=1 00:27:39.577 --rc genhtml_function_coverage=1 00:27:39.577 --rc genhtml_legend=1 00:27:39.577 --rc geninfo_all_blocks=1 00:27:39.577 --rc geninfo_unexecuted_blocks=1 00:27:39.577 00:27:39.577 ' 00:27:39.577 20:20:47 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.577 --rc genhtml_branch_coverage=1 00:27:39.577 --rc genhtml_function_coverage=1 00:27:39.577 --rc genhtml_legend=1 00:27:39.577 --rc geninfo_all_blocks=1 00:27:39.577 --rc geninfo_unexecuted_blocks=1 00:27:39.577 00:27:39.577 ' 00:27:39.577 20:20:47 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.577 --rc genhtml_branch_coverage=1 00:27:39.577 --rc genhtml_function_coverage=1 00:27:39.577 --rc genhtml_legend=1 00:27:39.577 --rc geninfo_all_blocks=1 00:27:39.577 --rc geninfo_unexecuted_blocks=1 00:27:39.577 00:27:39.577 ' 00:27:39.577 20:20:47 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.577 --rc genhtml_branch_coverage=1 00:27:39.577 --rc genhtml_function_coverage=1 00:27:39.577 --rc genhtml_legend=1 00:27:39.577 --rc geninfo_all_blocks=1 00:27:39.577 --rc geninfo_unexecuted_blocks=1 00:27:39.577 00:27:39.577 ' 00:27:39.577 20:20:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:39.577 20:20:47 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:39.577 20:20:47 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.577 20:20:47 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.577 20:20:47 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.577 20:20:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.577 20:20:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.577 20:20:47 -- paths/export.sh@5 -- $ export PATH 00:27:39.577 20:20:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.578 20:20:47 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:39.578 20:20:47 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:39.578 20:20:47 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734380447.XXXXXX 00:27:39.578 20:20:47 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734380447.RjOwYw 00:27:39.578 20:20:47 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:39.578 20:20:47 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:27:39.578 20:20:47 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:39.578 20:20:47 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:39.578 20:20:47 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:39.578 20:20:47 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:39.578 20:20:47 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:39.578 20:20:47 -- common/autotest_common.sh@10 -- $ set +x 00:27:39.578 20:20:47 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:27:39.578 20:20:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:39.578 20:20:47 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:39.578 20:20:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:39.578 20:20:47 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:39.578 20:20:47 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:39.578 20:20:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:39.578 20:20:47 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:39.578 20:20:47 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:39.578 20:20:47 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:39.839 20:20:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:39.839 + [[ -n 5001 ]] 00:27:39.839 + sudo kill 5001 00:27:39.850 [Pipeline] } 00:27:39.865 [Pipeline] // timeout 00:27:39.870 [Pipeline] } 00:27:39.885 [Pipeline] // stage 00:27:39.890 [Pipeline] } 00:27:39.904 [Pipeline] // catchError 00:27:39.913 [Pipeline] stage 00:27:39.915 [Pipeline] { (Stop VM) 00:27:39.927 [Pipeline] sh 00:27:40.212 + vagrant halt 00:27:42.755 ==> default: Halting domain... 00:27:48.062 [Pipeline] sh 00:27:48.346 + vagrant destroy -f 00:27:50.896 ==> default: Removing domain... 00:27:51.482 [Pipeline] sh 00:27:51.768 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:27:51.780 [Pipeline] } 00:27:51.794 [Pipeline] // stage 00:27:51.800 [Pipeline] } 00:27:51.814 [Pipeline] // dir 00:27:51.819 [Pipeline] } 00:27:51.833 [Pipeline] // wrap 00:27:51.840 [Pipeline] } 00:27:51.852 [Pipeline] // catchError 00:27:51.862 [Pipeline] stage 00:27:51.864 [Pipeline] { (Epilogue) 00:27:51.877 [Pipeline] sh 00:27:52.160 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:56.399 [Pipeline] catchError 00:27:56.401 [Pipeline] { 00:27:56.414 [Pipeline] sh 00:27:56.701 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:56.701 Artifacts sizes are good 00:27:56.712 [Pipeline] } 00:27:56.727 [Pipeline] // catchError 00:27:56.738 [Pipeline] archiveArtifacts 00:27:56.745 Archiving artifacts 00:27:56.852 [Pipeline] cleanWs 00:27:56.865 [WS-CLEANUP] Deleting project workspace... 00:27:56.865 [WS-CLEANUP] Deferred wipeout is used... 00:27:56.873 [WS-CLEANUP] done 00:27:56.875 [Pipeline] } 00:27:56.890 [Pipeline] // stage 00:27:56.895 [Pipeline] } 00:27:56.909 [Pipeline] // node 00:27:56.915 [Pipeline] End of Pipeline 00:27:56.964 Finished: SUCCESS
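For reference, the tail of this run reduces to a short teardown sequence: autopackage skipped packaging and only rendered the build-timing flamegraph, after which Jenkins stopped the VM and collected the artifacts. A bash approximation of those final steps (the real flow is the Groovy pipeline shown above; the flamegraph output path is an assumption, as the log only shows the command itself):

    # Build-timing flamegraph, as invoked by timing_finish in autopackage.sh.
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds \
        /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg   # output path assumed

    # VM teardown and artifact collection driven by the pipeline stages above.
    vagrant halt                # "Halting domain..."
    vagrant destroy -f         # "Removing domain..."
    mv output /var/jenkins/workspace/nvme-vg-autotest/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # reports "Artifacts sizes are good"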